WorldWideScience

Sample records for atlas web server

  1. The Medicago truncatula gene expression atlas web server

    Directory of Open Access Journals (Sweden)

    Tang Yuhong

    2009-12-01

    Full Text Available Abstract Background Legumes (Leguminosae or Fabaceae) play a major role in agriculture. Transcriptomics studies in the model legume species, Medicago truncatula, are instrumental in helping to formulate hypotheses about the role of legume genes. With the rapid growth of publicly available Affymetrix GeneChip® Medicago Genome Array data from a great range of tissues, cell types, growth conditions, and stress treatments, the legume research community desires an effective bioinformatics system to aid efforts to interpret the Medicago genome through functional genomics. We developed the Medicago truncatula Gene Expression Atlas (MtGEA) web server for this purpose. Description The Medicago truncatula Gene Expression Atlas (MtGEA) web server is a centralized platform for analyzing the Medicago transcriptome. Currently, the web server hosts gene expression data from 156 Affymetrix GeneChip® Medicago genome arrays in 64 different experiments, covering a broad range of developmental and environmental conditions. The server enables flexible, multifaceted analyses of transcript data and provides a range of additional information about genes, including different types of annotation and links to the genome sequence, which help users formulate hypotheses about gene function. Transcript data can be accessed using Affymetrix probe identification number, DNA sequence, gene name, functional description in natural language, GO and KEGG annotation terms, and InterPro domain number. Transcripts can also be discovered through co-expression or differential expression analysis. Flexible tools to select a subset of experiments and to visualize and compare expression profiles of multiple genes have been implemented. Data can be downloaded, in part or full, in a tabular form compatible with common analytical and visualization software. The web server will be updated on a regular basis to incorporate new gene expression data and genome annotation, and is accessible

  2. Web server attack analyzer

    OpenAIRE

    Mižišin, Michal

    2013-01-01

    Web server attack analyzer - Abstract The goal of this work was to create a prototype analyzer of injection-flaw attacks on a web server. The proposed solution combines the capabilities of a web application firewall and a web server log analyzer. The analysis is based on configurable signatures defined by regular expressions. This paper begins with a summary of web attacks, followed by an analysis of detection techniques on web servers and a description and justification of the selected implementation. In the end are charact...
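
    The signature-based detection described above can be illustrated with a minimal sketch (this is not the author's implementation; the signatures and the sample request line are invented for illustration):

```python
import re

# Hypothetical signatures for common injection flaws; real rule sets are far larger.
SIGNATURES = {
    "sql_injection":  re.compile(r"(\bunion\b.+\bselect\b|'\s*or\s+'1'\s*=\s*'1)", re.I),
    "xss":            re.compile(r"<\s*script\b", re.I),
    "path_traversal": re.compile(r"\.\./"),
}

def analyze(request_line):
    """Return the names of all signatures that match a logged request line."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(request_line)]

# Example entry as it might appear in an access log (made up)
print(analyze("GET /item.php?id=1' OR '1'='1 HTTP/1.1"))   # -> ['sql_injection']
```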

  3. Web Server Embedded System

    Directory of Open Access Journals (Sweden)

    Adharul Muttaqin

    2014-07-01

    Full Text Available Abstract Embedded systems currently receive particular attention in computer technology; several Linux operating systems and a variety of web servers have been prepared to support embedded systems, and the web server is one of the applications that can be run on them. The choice of web server for an embedded environment is still rarely studied, so this research focuses on two web server applications whose main features promise "lightness" in CPU and memory consumption: Light HTTPD and Tiny HTTPD. Using the thread (users), ramp-up period, and loop count parameters in a stress test of the embedded system, this study determines which of Light HTTPD and Tiny HTTPD is better suited to embedded use on a BeagleBoard in terms of CPU and memory consumption. The results show that, with respect to CPU consumption on the BeagleBoard embedded system, Light HTTPD is recommended over Tiny HTTPD, because there is a very significant difference in CPU load between the two web services. Keywords: embedded system, web server
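
    As an illustration of the stress-test parameters named in the abstract (threads/users, ramp-up period, loop count), a minimal JMeter-style load generator might look like the sketch below; the URL, timeout and parameter values are assumptions, not part of the study:

```python
import threading, time, urllib.request

def user(url, loop_count, results):
    """One simulated user: request the page loop_count times and record reply times."""
    for _ in range(loop_count):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
            results.append(time.perf_counter() - start)
        except OSError:
            results.append(None)          # failed request

def stress_test(url, threads=50, ramp_up=10.0, loop_count=20):
    """Start `threads` users spread evenly over the ramp-up period (seconds)."""
    results, workers = [], []
    for _ in range(threads):
        t = threading.Thread(target=user, args=(url, loop_count, results))
        workers.append(t)
        t.start()
        time.sleep(ramp_up / threads)     # stagger thread starts like a JMeter ramp-up
    for t in workers:
        t.join()
    return results

# results = stress_test("http://192.168.1.10/", threads=50, ramp_up=10.0, loop_count=20)
```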

  4. A RESTful Web service interface to the ATLAS COOL database

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

    The COOL database in ATLAS is primarily used for storing detector conditions data, but also status flags, which are uploaded summaries of information indicating the detector reliability during a run. This paper introduces the use of CherryPy, a Python application server which acts as an intermediate layer between a web interface and the database, providing a simple means of storing data in and retrieving it from the COOL database; this layer has found use in many web applications. The software layer is designed to be RESTful, implementing the common CRUD (Create, Read, Update, Delete) database methods by interpreting the HTTP method (POST, GET, PUT, DELETE) on the server along with a URL identifying the database resource to be operated on. The format of the data (text, XML, etc.) is also determined through the HTTP protocol. The details of this layer are described along with a popular application demonstrating its use, the ATLAS run list web page.
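
    A minimal sketch of the RESTful CRUD-to-HTTP mapping described above, using CherryPy's built-in method dispatcher; the resource name, parameters and return strings are hypothetical, and this is not the ATLAS code:

```python
import cherrypy

class CoolFolder(object):
    """Hypothetical conditions-folder resource; the HTTP method selects the CRUD action."""
    exposed = True

    def GET(self, *vpath, **params):      # Read:    GET /folder/<tag>
        return "payload for %s" % "/".join(vpath)

    def POST(self, **payload):            # Create:  POST /folder
        return "stored new entry"

    def PUT(self, *vpath, **payload):     # Update:  PUT /folder/<tag>
        return "updated %s" % "/".join(vpath)

    def DELETE(self, *vpath):             # Delete:  DELETE /folder/<tag>
        return "removed %s" % "/".join(vpath)

# Route requests to GET/POST/PUT/DELETE according to the HTTP method of the request.
config = {'/': {'request.dispatch': cherrypy.dispatch.MethodDispatcher()}}
cherrypy.quickstart(CoolFolder(), '/folder', config)
```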

  5. Dynamic Web Pages: Performance Impact on Web Servers.

    Science.gov (United States)

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)
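
    A toy example of the kind of multivariate linear regression model mentioned above, fitted by ordinary least squares; the feature names and all numbers are made up for illustration:

```python
import numpy as np

# Hypothetical measurements: each row is one dynamic request type
# [script CPU time (ms), response size (KB), concurrent clients]
X = np.array([[2.0,  5.0, 10],
              [4.0, 12.0, 20],
              [8.0, 30.0, 40],
              [1.5,  3.0,  5]])
y = np.array([18.0, 41.0, 95.0, 12.0])        # measured response times (ms), invented

# Fit y = b0 + b1*x1 + b2*x2 + b3*x3 by ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_request = np.array([1.0, 3.0, 8.0, 15])   # leading 1.0 is the intercept term
print(f"predicted response time: {new_request @ coef:.1f} ms")
```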

  6. Using Web Server Logs in Evaluating Instructional Web Sites.

    Science.gov (United States)

    Ingram, Albert L.

    2000-01-01

    Web server logs contain a great deal of information about who uses a Web site and how they use it. This article discusses the analysis of Web logs for instructional Web sites; reviews the data stored in most Web server logs; demonstrates what further information can be gleaned from the logs; and discusses analyzing that information for the…
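
    A minimal sketch of the kind of log analysis discussed here, counting hits per page from a Common Log Format access log; the log file name is hypothetical:

```python
import re
from collections import Counter

# Common Log Format: host ident user [time] "method path protocol" status bytes
LOG_LINE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) \S+" (\d{3}) (\S+)')

def page_hits(log_path):
    """Count requests per URL path from a web server access log."""
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match:
                hits[match.group(4)] += 1    # group 4 is the requested path
    return hits

# hits = page_hits("access.log")             # hypothetical log file
# print(hits.most_common(10))
```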

  7. Alignment-Annotator web server: rendering and annotating sequence alignments.

    Science.gov (United States)

    Gille, Christoph; Fähling, Michael; Weyand, Birgit; Wieland, Thomas; Gille, Andreas

    2014-07-01

    Alignment-Annotator is a novel web service designed to generate interactive views of annotated nucleotide and amino acid sequence alignments (i) de novo and (ii) embedded in other software. All computations are performed at server side. Interactivity is implemented in HTML5, a language native to web browsers. The alignment is initially displayed using default settings and can be modified with the graphical user interfaces. For example, individual sequences can be reordered or deleted using drag and drop, amino acid color code schemes can be applied and annotations can be added. Annotations can be made manually or imported (BioDAS servers, UniProt, the Catalytic Site Atlas and the PDB). Some edits take immediate effect while others require server interaction and may take a few seconds to execute. The final alignment document can be downloaded as a zip-archive containing the HTML files. Because of the use of HTML, the resulting interactive alignment can be viewed on any platform including Windows, Mac OS X, Linux, Android and iOS in any standard web browser. Importantly, neither plugins nor Java are required, and therefore Alignment-Annotator represents the first interactive browser-based alignment visualization. http://www.bioinformatics.org/strap/aa/ and http://strap.charite.de/aa/. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. The RNAsnp web server

    DEFF Research Database (Denmark)

    Radhakrishnan, Sabarinathan; Tafer, Hakim; Seemann, Ernst Stefan

    2013-01-01

    , are derived from extensive pre-computed tables of distributions of substitution effects as a function of gene length and GC content. Here, we present a web service that not only provides an interface for RNAsnp but also features a graphical output representation. In addition, the web server is connected...... to a local mirror of the UCSC genome browser database that enables the users to select the genomic sequences for analysis and visualize the results directly in the UCSC genome browser. The RNAsnp web server is freely available at: http://rth.dk/resources/rnasnp/....

  9. LocExpress: a web server for efficiently estimating expression of novel transcripts.

    Science.gov (United States)

    Hou, Mei; Tian, Feng; Jiang, Shuai; Kong, Lei; Yang, Dechang; Gao, Ge

    2016-12-22

    The temporal and spatial-specific expression pattern of a transcript in multiple tissues and cell types can provide key clues about its function. While several gene expression atlases are available online as pre-computed databases for known gene models, it is still challenging to obtain expression profiles for previously uncharacterized (i.e. novel) transcripts efficiently. Here we developed LocExpress, a web server for efficiently estimating expression of novel transcripts across multiple tissues and cell types in human (20 normal tissues/cell types and 14 cell lines) as well as in mouse (24 normal tissues/cell types and nine cell lines). As a wrapper to an RNA-Seq quantification algorithm, LocExpress efficiently reduces the time cost by making abundance estimation calls increasingly within the minimum spanning bundle region of input transcripts. For a given novel gene model, such a local context-oriented strategy allows LocExpress to estimate its FPKMs in hundreds of samples within minutes on a standard Linux box, making an online web server possible. To the best of our knowledge, LocExpress is the only web server to provide nearly real-time expression estimation for novel transcripts in common tissues and cell types. The server is publicly available at http://loc-express.cbi.pku.edu.cn.

  10. Web Server Configuration for an Academic Intranet

    National Research Council Canada - National Science Library

    Baltzis, Stamatios

    2000-01-01

    .... One of the factors that boosted this ability was the evolution of Web servers. Using web server technology, one can connect and exchange information with the most remote places all over the...

  11. Implementation of SRPT Scheduling in Web Servers

    National Research Council Canada - National Science Library

    Harchol-Balter, Mor

    2000-01-01

    .... Experiments use the Linux operating system and the Flash web server. All experiments are repeated under a range of server loads and under both trace-based workloads and those generated by a Web workload generator...
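
    A toy sketch of SRPT (shortest remaining processing time) scheduling as studied above, ordering pending responses by remaining bytes; it stands in for the idea only and is not the Flash/Linux implementation used in the paper:

```python
import heapq

class SRPTQueue:
    """Toy SRPT queue: always serve the request with the least remaining bytes to send."""
    def __init__(self):
        self._heap = []

    def add(self, request_id, remaining_bytes):
        heapq.heappush(self._heap, (remaining_bytes, request_id))

    def serve(self, quantum):
        """Send up to `quantum` bytes of the shortest-remaining request."""
        if not self._heap:
            return None
        remaining, request_id = heapq.heappop(self._heap)
        remaining -= quantum
        if remaining > 0:                          # not finished: requeue with reduced size
            heapq.heappush(self._heap, (remaining, request_id))
        return request_id, max(remaining, 0)

q = SRPTQueue()
q.add("big.html", 500_000)
q.add("small.css", 2_000)
print(q.serve(4_000))    # the small file completes before the large one is touched
```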

  12. IBM WebSphere Application Server 8.0 Administration Guide

    CERN Document Server

    Robinson, Steve

    2011-01-01

    IBM WebSphere Application Server 8.0 Administration Guide is a highly practical, example-driven tutorial. You will be introduced to WebSphere Application Server 8.0, and guided through configuration, deployment, and tuning for optimum performance. If you are an administrator who wants to get up and running with IBM WebSphere Application Server 8.0, then this book is not to be missed. Experience with WebSphere and Java would be an advantage, but is not essential.

  13. WebSpy: An Architecture for Monitoring Web Server Availability in a Multi-Platform Environment

    Directory of Open Access Journals (Sweden)

    Madhan Mohan Thirukonda

    2002-01-01

    Full Text Available For an electronic business (e-business, customer satisfaction can be the difference between long-term success and short-term failure. Customer satisfaction is highly impacted by Web server availability, as customers expect a Web site to be available twenty-four hours a day and seven days a week. Unfortunately, unscheduled Web server downtime is often beyond the control of the organization. What is needed is an effective means of identifying and recovering from Web server downtime in order to minimize the negative impact on the customer. An automated architecture, called WebSpy, has been developed to notify administration and to take immediate action when Web server downtime is detected. This paper describes the WebSpy architecture and differentiates it from other popular Web monitoring tools. The results of a case study are presented as a means of demonstrating WebSpy's effectiveness in monitoring Web server availability.
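
    The availability-check-and-notify loop described for WebSpy could be sketched roughly as follows (this is not WebSpy itself; the monitored URL, contact address and local mail relay are assumptions):

```python
import smtplib, time, urllib.request
from email.message import EmailMessage

SITES = ["http://www.example.com/"]           # hypothetical monitored web servers
ADMIN = "webmaster@example.com"               # hypothetical contact address

def is_up(url, timeout=5):
    """Return True if the server answers the request without a server error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except OSError:
        return False

def notify(url):
    """Email the administrator about detected downtime (assumes a local mail relay)."""
    msg = EmailMessage()
    msg["Subject"] = f"Web server down: {url}"
    msg["From"], msg["To"] = ADMIN, ADMIN
    msg.set_content(f"{url} failed its availability check at {time.ctime()}")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

while True:                                   # poll each site once a minute, indefinitely
    for site in SITES:
        if not is_up(site):
            notify(site)
    time.sleep(60)
```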

  14. HDF-EOS Web Server

    Science.gov (United States)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: Extract metadata in Object Definition Language (ODL) from an HDF-EOS file, Convert the metadata from ODL to Extensible Markup Language (XML), Reformat the XML metadata into human-readable Hypertext Markup Language (HTML), Publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer, and Reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.
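
    One step of the pipeline described above, reformatting XML metadata into human-readable HTML, might look roughly like this sketch; the file names are hypothetical, and the actual tools chained by the shell script are not reproduced here:

```python
import xml.etree.ElementTree as ET

def metadata_to_html(xml_path, html_path):
    """Render XML metadata (element tag / text pairs) as a simple HTML table."""
    root = ET.parse(xml_path).getroot()
    rows = "".join(
        f"<tr><td>{elem.tag}</td><td>{(elem.text or '').strip()}</td></tr>"
        for elem in root.iter() if elem.text and elem.text.strip()
    )
    with open(html_path, "w") as out:
        out.write(f"<html><body><table border='1'>{rows}</table></body></html>")

# metadata_to_html("granule_metadata.xml", "granule_metadata.html")  # hypothetical file names
```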

  15. Web server's reliability improvements using recurrent neural networks

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan

    2012-01-01

    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity for the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (t...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...

  16. RANCANG BANGUN PERANGKAT LUNAK MANAJEMEN DATABASE SQL SERVER BERBASIS WEB

    Directory of Open Access Journals (Sweden)

    Muchammad Husni

    2005-01-01

    Full Text Available Microsoft SQL Server is a desktop database server application with a client/server architecture: it has a client component, whose function is to display and manipulate data, and a server component, whose function is to store, retrieve, and secure databases. Management operations on all database servers in the network are performed by the database administrator using SQL Server's main administrative tool, called Enterprise Manager. As a consequence, the database administrator can only perform these operations on a computer on which Microsoft SQL Server has been installed. In this research, a web-based application was designed using ASP.Net to manage database servers. The application uses ADO.NET, which relies on Transact-SQL and stored procedures on the server, to perform database management operations on a SQL database server and to present them on the web. The database administrator can run the web-based application from any computer on the network and connect to the SQL database server using a web browser. This makes it easier for administrators to perform their tasks without having to use the server computer. Keywords: Transact-SQL, ASP.Net, ADO.NET, SQL Server

  17. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  18. Getting started with Oracle WebLogic Server 12c developer's guide

    CERN Document Server

    Nunes, Fabio Mazanatti

    2013-01-01

    Getting Started with Oracle WebLogic Server 12c is a fast-paced and feature-packed book, designed to get you working with Java EE 6, JDK 7 and Oracle WebLogic Server 12c straight away, so start developing your own applications.Getting Started with Oracle WebLogic Server 12c: Developer's Guide is written for developers who are just getting started, or who have some experience, with Java EE who want to learn how to develop for and use Oracle WebLogic Server. Getting Started with Oracle WebLogic Server 12c: Developer's Guide also provides a great overview of the updated features of the 12c releas

  19. TMFoldWeb: a web server for predicting transmembrane protein fold class.

    Science.gov (United States)

    Kozma, Dániel; Tusnády, Gábor E

    2015-09-17

    Here we present TMFoldWeb, the web server implementation of TMFoldRec, a transmembrane protein fold recognition algorithm. TMFoldRec uses statistical potentials and utilizes topology filtering and a gapless threading algorithm. It ranks template structures and selects the most likely candidates and estimates the reliability of the obtained lowest energy model. The statistical potential was developed in a maximum likelihood framework on a representative set of the PDBTM database. According to the benchmark test the performance of TMFoldRec is about 77 % in correctly predicting fold class for a given transmembrane protein sequence. An intuitive web interface has been developed for the recently published TMFoldRec algorithm. The query sequence goes through a pipeline of topology prediction and a systematic sequence to structure alignment (threading). Resulting templates are ordered by energy and reliability values and are colored according to their significance level. Besides the graphical interface, a programmatic access is available as well, via a direct interface for developers or for submitting genome-wide data sets. The TMFoldWeb web server is unique and currently the only web server that is able to predict the fold class of transmembrane proteins while assigning reliability scores for the prediction. This method is prepared for genome-wide analysis with its easy-to-use interface, informative result page and programmatic access. Considering the info-communication evolution in the last few years, the developed web server, as well as the molecule viewer, is responsive and fully compatible with the prevalent tablets and mobile devices.

  20. Mfold web server for nucleic acid folding and hybridization prediction.

    Science.gov (United States)

    Zuker, Michael

    2003-07-01

    The abbreviated name, 'mfold web server', describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and 'energy dot plots', are available for the folding of single sequences. A variety of 'bulk' servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as 'MFOLDROOT'.

  1. Supervisory control system implemented in programmable logical controller web server

    OpenAIRE

    Milavec, Simon

    2012-01-01

    In this thesis, we study the feasibility of supervisory control and data acquisition (SCADA) system realisation in a web server of a programmable logic controller. With the introduction of Ethernet protocol to the area of process control, the more powerful programmable logic controllers obtained integrated web servers. The web server of a programmable logic controller, produced by Siemens, will also be described in this thesis. Firstly, the software and the hardware equipment used for real...

  2. SPEER-SERVER: a web server for prediction of protein specificity determining sites.

    Science.gov (United States)

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J; Panchenko, Anna R; Chakrabarti, Saikat

    2012-07-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids' Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/.

  3. Analisis Perbandingan Unjuk Kerja Sistem Penyeimbang Beban Web Server dengan HAProxy dan Pound Links

    Directory of Open Access Journals (Sweden)

    Dite Ardian

    2013-04-01

    Full Text Available The development of internet technology has led many organizations to expand their website services. Initially a single web server, accessible to everyone through the Internet, is used, but when the number of users accessing the web server becomes very large, so does the traffic load on the web server. Optimization of the web server is therefore necessary to cope with the overload it receives when traffic is high. The methodology of this final-project research includes a literature study, system design, and testing of the system. References were drawn from related books as well as from several internet sources. The design in this thesis uses HAProxy and Pound Links as web server load balancers. The final stage of this research is testing of the network system, so as to create a web server system that is reliable and safe. The result is a web server system that can be accessed by many users simultaneously and rapidly, as the HAProxy and Pound Links load-balancing system set up as a front end improves web server performance, creating a web server with high performance and high availability.
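
    Conceptually, the front-end load balancer's job described above amounts to spreading requests over healthy back ends; a minimal round-robin sketch with a health check is shown below (it stands in for HAProxy/Pound only conceptually, and the back-end addresses are invented):

```python
import itertools, urllib.request

BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080"]   # hypothetical back-end web servers
_cycle = itertools.cycle(BACKENDS)

def healthy(url, timeout=2):
    """A back end counts as healthy if it answers without a server error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except OSError:
        return False

def next_backend():
    """Round-robin over the back ends, skipping any that fail the health check."""
    for _ in range(len(BACKENDS)):
        candidate = next(_cycle)
        if healthy(candidate):
            return candidate
    raise RuntimeError("no healthy back end available")
```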

  4. UPGRADE OF THE CENTRAL WEB SERVERS

    CERN Multimedia

    WEB Services

    2000-01-01

    During the weekend of the 25-26 March, the infrastructure of the CERN central web servers will undergo a major upgrade. As a result, the web services hosted by the central servers (that is, the services the address of which starts with www.cern.ch) will be unavailable Friday 24th, from 17:30 to 18:30, and may suffer from short interruptions until 20:00. This includes access to the CERN top-level page as well as the services referenced by this page (such as access to the scientific program and events information, or training, recruitment, housing services). After the upgrade, the change will be transparent to the users. Expert readers may however notice that when they connect to a web page starting with www.cern.ch this address is slightly changed when the page is actually displayed on their screen (e.g. www.cern.ch/Press will be changed to Press.web.cern.ch/Press). They should not worry: this behaviour, necessary for technical reasons, is normal. web.services@cern.ch, Tel 74989

  5. Web server for priority ordered multimedia services

    Science.gov (United States)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions of the CM services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of the distributed network with load balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for an improved disk access and a higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority ordered buffering of the retrieved Web pages and CM data streams that are fed into an auto regressive moving average (ARMA) based traffic shaping circuitry before being transmitted through the network.
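
    The priority ordering quoted above can be illustrated with a small priority-queue sketch; the numeric encoding of the levels is an assumption made for illustration:

```python
import heapq, itertools

# Priority order taken from the abstract: lower number = served first (assumed encoding).
PRIORITY = {"admin_rw": 0, "hot_cm_multicast": 1, "cm_read": 2,
            "web_read": 3, "cm_write": 4, "web_write": 5}

_counter = itertools.count()          # tie-breaker keeps FIFO order within a priority class
_queue = []

def submit(request_type, payload):
    heapq.heappush(_queue, (PRIORITY[request_type], next(_counter), payload))

def dispatch():
    """Return the highest-priority pending request, or None if the queue is empty."""
    return heapq.heappop(_queue)[2] if _queue else None

submit("web_read", "GET /index.html")
submit("hot_cm_multicast", "STREAM movie-42")
print(dispatch())    # the hot CM multicast is served before the ordinary web read
```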

  6. Web-Beagle: a web server for the alignment of RNA secondary structures.

    Science.gov (United States)

    Mattei, Eugenio; Pietrosanto, Marco; Ferrè, Fabrizio; Helmer-Citterich, Manuela

    2015-07-01

    Web-Beagle (http://beagle.bio.uniroma2.it) is a web server for the pairwise global or local alignment of RNA secondary structures. The server exploits a new encoding for RNA secondary structure and a substitution matrix of RNA structural elements to perform RNA structural alignments. The web server allows the user to compute up to 10 000 alignments in a single run, taking as input sets of RNA sequences and structures or primary sequences alone. In the latter case, the server computes the secondary structure prediction for the RNAs on-the-fly using RNAfold (free energy minimization). The user can also compare a set of input RNAs to one of five pre-compiled RNA datasets including lncRNAs and 3' UTRs. All types of comparison produce in output the pairwise alignments along with structural similarity and statistical significance measures for each resulting alignment. A graphical color-coded representation of the alignments allows the user to easily identify structural similarities between RNAs. Web-Beagle can be used for finding structurally related regions in two or more RNAs, for the identification of homologous regions or for functional annotation. Benchmark tests show that Web-Beagle has lower computational complexity, running time and better performances than other available methods. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Technical Note: On The Usage and Development of the AWAKE Web Server and Web Applications

    CERN Document Server

    Berger, Dillon Tanner

    2017-01-01

    The purpose of this technical note is to give a brief explanation of the AWAKE Web Server, the current web applications it serves, and how to edit, maintain, and update the source code. The majority of this paper is dedicated to the development of the server and its web applications.

  8. ProBiS-2012: web server and web services for detection of structurally similar binding sites in proteins.

    Science.gov (United States)

    Konc, Janez; Janezic, Dusanka

    2012-07-01

    The ProBiS web server is a web server for detection of structurally similar binding sites in the PDB and for local pairwise alignment of protein structures. In this article, we present a new version of the ProBiS web server that is 10 times faster than earlier versions, due to the efficient parallelization of the ProBiS algorithm, which now allows significantly faster comparison of a protein query against the PDB and reduces the calculation time for scanning the entire PDB from hours to minutes. It also features new web services, and an improved user interface. In addition, the new web server is united with the ProBiS-Database and thus provides instant access to pre-calculated protein similarity profiles for over 29 000 non-redundant protein structures. The ProBiS web server is particularly adept at detection of secondary binding sites in proteins. It is freely available at http://probis.cmm.ki.si/old-version, and the new ProBiS web server is at http://probis.cmm.ki.si.

  9. Server Interface Descriptions for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning; Møller, Anders; Su, Zhendong

    2013-01-01

    Automated testing of JavaScript web applications is complicated by the communication with servers. Specifically, it is difficult to test the JavaScript code in isolation from the server code and database contents. We present a practical solution to this problem. First, we demonstrate that formal...... server interface descriptions are useful in automated testing of JavaScript web applications for separating the concerns of the client and the server. Second, to support the construction of server interface descriptions for existing applications, we introduce an effective inference technique that learns...... communication patterns from sample data. By incorporating interface descriptions into the testing tool Artemis, our experimental results show that we increase the level of automation for high-coverage testing on a collection of JavaScript web applications that exchange JSON data between the clients and servers...

  10. GASS-WEB: a web server for identifying enzyme active sites based on genetic algorithms.

    Science.gov (United States)

    Moraes, João P A; Pappa, Gisele L; Pires, Douglas E V; Izidoro, Sandro C

    2017-07-03

    Enzyme active sites are important and conserved functional regions of proteins whose identification can be an invaluable step toward protein function prediction. Most of the existing methods for this task are based on active site similarity and present limitations, including performing only exact matches on template residues and imposing template size restraints, as well as not being capable of finding inter-domain active sites. To fill this gap, we proposed GASS-WEB, a user-friendly web server that uses GASS (Genetic Active Site Search), a method based on an evolutionary algorithm to search for similar active sites in proteins. GASS-WEB can be used under two different scenarios: (i) given a protein of interest, to match a set of specific active site templates; or (ii) given an active site template, looking for it in a database of protein structures. The method has been shown to be very effective on a range of experiments and was able to correctly identify >90% of the catalogued active sites from the Catalytic Site Atlas. It also managed to achieve a Matthews correlation coefficient of 0.63 using the Critical Assessment of protein Structure Prediction (CASP 10) dataset. In our analysis, GASS ranked fourth among 18 methods. GASS-WEB is freely available at http://gass.unifei.edu.br/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. EarthServer - 3D Visualization on the Web

    Science.gov (United States)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open GeoSpatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Clients technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers without requiring the user to install plugins or addons. Additionally, we are able to run the earth data visualization client on a wide range of different platforms with very different soft- and hardware requirements such as smart phones (e.g. iOS, Android), different desktop systems etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. Underlying the EarthServer web client

  12. The Role of the Web Server in a Capstone Web Application Course

    Science.gov (United States)

    Umapathy, Karthikeyan; Wallace, F. Layne

    2010-01-01

    Web applications have become commonplace in the Information Systems curriculum. Much of the discussion about Web development for capstone courses has centered on the scripting tools. Very little has been discussed about different ways to incorporate the Web server into Web application development courses. In this paper, three different ways of…

  13. 3Drefine: an interactive web server for efficient protein structure refinement.

    Science.gov (United States)

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-07-08

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Dynamic thread assignment in web server performance optimization

    NARCIS (Netherlands)

    van der Weij, W.; Bhulai, S.; van der Mei, R.D.

    2009-01-01

    Popular web sites are expected to handle a huge number of requests concurrently within a reasonable time frame. The performance of these web sites is largely dependent on effective thread management of their web servers. Although the implementation of static and dynamic thread policies is common

  15. PANNZER2: a rapid functional annotation web server.

    Science.gov (United States)

    Törönen, Petri; Medlar, Alan; Holm, Liisa

    2018-05-08

    The unprecedented growth of high-throughput sequencing has led to an ever-widening annotation gap in protein databases. While computational prediction methods are available to make up the shortfall, a majority of public web servers are hindered by practical limitations and poor performance. Here, we introduce PANNZER2 (Protein ANNotation with Z-scoRE), a fast functional annotation web server that provides both Gene Ontology (GO) annotations and free text description predictions. PANNZER2 uses SANSparallel to perform high-performance homology searches, making bulk annotation based on sequence similarity practical. PANNZER2 can output GO annotations from multiple scoring functions, enabling users to see which predictions are robust across predictors. Finally, PANNZER2 predictions scored within the top 10 methods for molecular function and biological process in the CAFA2 NK-full benchmark. The PANNZER2 web server is updated on a monthly schedule and is accessible at http://ekhidna2.biocenter.helsinki.fi/sanspanz/. The source code is available under the GNU Public Licence v3.

  16. CovalentDock Cloud: a web server for automated covalent docking.

    Science.gov (United States)

    Ouyang, Xuchang; Zhou, Shuo; Ge, Zemei; Li, Runtao; Kwoh, Chee Keong

    2013-07-01

    Covalent binding is an important mechanism for many drugs to gain their function. We developed a computational algorithm to model this chemical event and extended it to a web server, the CovalentDock Cloud, to make it accessible directly online without any local installation and configuration. It provides a simple yet user-friendly web interface to perform covalent docking experiments and analysis online. The web server accepts the structures of both the ligand and the receptor uploaded by the user or retrieved from online databases with a valid access id. It identifies the potential covalent binding patterns, carries out the covalent docking experiments and provides visualization of the result for user analysis. This web server is free and open to all users at http://docking.sce.ntu.edu.sg/.

  17. Solution for an Improved WEB Server

    Directory of Open Access Journals (Sweden)

    George PECHERLE

    2009-12-01

    Full Text Available We want to present a solution with maximum performance from a web server, in terms of services that the server provides. We do not always know what tools to use or how to configure what we have in order to get what we need. Keeping the Internet-related services you provide in working condition can sometimes be a real challenge. And with the increasing demand in Internet services, we need to come up with solutions to problems that occur every day.

  18. WebMGA: a customizable web server for fast metagenomic sequence analysis.

    Science.gov (United States)

    Wu, Sitao; Zhu, Zhengwei; Fu, Liming; Niu, Beifang; Li, Weizhong

    2011-09-07

    The new field of metagenomics studies microorganism communities by culture-independent sequencing. With the advances in next-generation sequencing techniques, researchers are facing tremendous challenges in metagenomic data analysis due to huge quantity and high complexity of sequence data. Analyzing large datasets is extremely time-consuming; also metagenomic annotation involves a wide range of computational tools, which are difficult to be installed and maintained by common users. The tools provided by the few available web servers are also limited and have various constraints such as login requirement, long waiting time, inability to configure pipelines etc. We developed WebMGA, a customizable web server for fast metagenomic analysis. WebMGA includes over 20 commonly used tools such as ORF calling, sequence clustering, quality control of raw reads, removal of sequencing artifacts and contaminations, taxonomic analysis, functional annotation etc. WebMGA provides users with rapid metagenomic data analysis using fast and effective tools, which have been implemented to run in parallel on our local computer cluster. Users can access WebMGA through web browsers or programming scripts to perform individual analysis or to configure and run customized pipelines. WebMGA is freely available at http://weizhongli-lab.org/metagenomic-analysis. WebMGA offers to researchers many fast and unique tools and great flexibility for complex metagenomic data analysis.

  19. WebMGA: a customizable web server for fast metagenomic sequence analysis

    Directory of Open Access Journals (Sweden)

    Niu Beifang

    2011-09-01

    Full Text Available Abstract Background The new field of metagenomics studies microorganism communities by culture-independent sequencing. With the advances in next-generation sequencing techniques, researchers are facing tremendous challenges in metagenomic data analysis due to huge quantity and high complexity of sequence data. Analyzing large datasets is extremely time-consuming; also metagenomic annotation involves a wide range of computational tools, which are difficult to be installed and maintained by common users. The tools provided by the few available web servers are also limited and have various constraints such as login requirement, long waiting time, inability to configure pipelines etc. Results We developed WebMGA, a customizable web server for fast metagenomic analysis. WebMGA includes over 20 commonly used tools such as ORF calling, sequence clustering, quality control of raw reads, removal of sequencing artifacts and contaminations, taxonomic analysis, functional annotation etc. WebMGA provides users with rapid metagenomic data analysis using fast and effective tools, which have been implemented to run in parallel on our local computer cluster. Users can access WebMGA through web browsers or programming scripts to perform individual analysis or to configure and run customized pipelines. WebMGA is freely available at http://weizhongli-lab.org/metagenomic-analysis. Conclusions WebMGA offers to researchers many fast and unique tools and great flexibility for complex metagenomic data analysis.

  20. Web application for monitoring mainframe computer, Linux operating systems and application servers

    OpenAIRE

    Dimnik, Tomaž

    2016-01-01

    This work presents the idea and the realization of web application for monitoring the operation of the mainframe computer, servers with Linux operating system and application servers. Web application is intended for administrators of these systems, as an aid to better understand the current state, load and operation of the individual components of the server systems.

  1. Web Proxy Auto Discovery for the WLCG

    Science.gov (United States)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.

    2017-10-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. The responses
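
    A rough client-side sketch of WPAD-style discovery: fetch the PAC file from a well-known WPAD URL and extract the proxy entries it lists. The URL is hypothetical, and a real Frontier/CVMFS client evaluates the PAC file's FindProxyForURL() function rather than scraping it:

```python
import re, urllib.request

def discover_proxies(wpad_url="http://wpad.example.org/wpad.dat"):
    """Fetch a PAC file from a WPAD URL (hypothetical host) and pull out its PROXY entries."""
    with urllib.request.urlopen(wpad_url, timeout=10) as resp:
        pac_text = resp.read().decode("utf-8", errors="replace")
    # PAC files return strings such as "PROXY squid1.example.org:3128; DIRECT"
    return re.findall(r"PROXY\s+([\w.\-]+:\d+)", pac_text)

# print(discover_proxies())   # e.g. ['squid1.example.org:3128', 'squid2.example.org:3128']
```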

  2. The FOLDALIGN web server for pairwise structural RNA alignment and mutual motif search

    DEFF Research Database (Denmark)

    Havgaard, Jakob Hull; Lyngsø, Rune B.; Gorodkin, Jan

    2005-01-01

    FOLDALIGN is a Sankoff-based algorithm for making structural alignments of RNA sequences. Here, we present a web server for making pairwise alignments between two RNA sequences, using the recently updated version of FOLDALIGN. The server can be used to scan two sequences for a common structural RNA...... motif of limited size, or the entire sequences can be aligned locally or globally. The web server offers a graphical interface, which makes it simple to make alignments and manually browse the results. the web server can be accessed at http://foldalign.kvl.dk...

  3. MultiSETTER: web server for multiple RNA structure comparison.

    Science.gov (United States)

    Čech, Petr; Hoksza, David; Svozil, Daniel

    2015-08-12

    Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. In this paper, we present the updated version of the SETTER web server that implements a user friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as the list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for a multiple RNA structure alignment. The MultiSETTER server offers the visual inspection of an alignment in 3D space which may reveal structural and functional relationships not captured by other multiple alignment methods based either on a sequence or on secondary structure motifs.

  4. Freiburg RNA Tools: a web server integrating INTARNA, EXPARNA and LOCARNA.

    Science.gov (United States)

    Smith, Cameron; Heyne, Steffen; Richter, Andreas S; Will, Sebastian; Backofen, Rolf

    2010-07-01

    The Freiburg RNA tools web server integrates three tools for the advanced analysis of RNA in a common web-based user interface. The tools IntaRNA, ExpaRNA and LocARNA support the prediction of RNA-RNA interaction, exact RNA matching and alignment of RNA, respectively. The Freiburg RNA tools web server and the software packages of the stand-alone tools are freely accessible at http://rna.informatik.uni-freiburg.de.

  5. EnviroAtlas National Layers Master Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This web service includes...

  6. MCTBI: a web server for predicting metal ion effects in RNA structures.

    Science.gov (United States)

    Sun, Li-Zhen; Zhang, Jing-Xiang; Chen, Shi-Jie

    2017-08-01

    Metal ions play critical roles in RNA structure and function. However, web servers and software packages for predicting ion effects in RNA structures are notably scarce. Furthermore, the existing web servers and software packages mainly neglect ion correlation and fluctuation effects, which are potentially important for RNAs. We here report a new web server, the MCTBI server (http://rna.physics.missouri.edu/MCTBI), for the prediction of ion effects for RNA structures. This server is based on the recently developed MCTBI, a model that can account for ion correlation and fluctuation effects for nucleic acid structures and can provide improved predictions for the effects of metal ions, especially for multivalent ions such as Mg 2+ effects, as shown by extensive theory-experiment test results. The MCTBI web server predicts metal ion binding fractions, the most probable bound ion distribution, the electrostatic free energy of the system, and the free energy components. The results provide mechanistic insights into the role of metal ions in RNA structure formation and folding stability, which is important for understanding RNA functions and the rational design of RNA structures. © 2017 Sun et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  7. Instant Debian build a web server

    CERN Document Server

    Parrella, Jose Miguel

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. A concise guide full of step-by-step recipes to teach you how to install and configure a Debian web server.This is an ideal book if you are an administrator on a Development Operations team or infrastructure management, who is passionate about Linux and their Web applications but have no previous experience with Debian or APT-based systems.

  8. Kelayakan Raspberry Pi sebagai Web Server: Perbandingan Kinerja Nginx, Apache, dan Lighttpd pada Platform Raspberry Pi

    Directory of Open Access Journals (Sweden)

    Rahmad Dawood

    2014-04-01

    Full Text Available Raspberry Pi is a small-sized computer, but it can function like an ordinary computer. Because it can function like a regular PC, it is also possible to run a web server application on the Raspberry Pi. This paper will report results from testing the feasibility and performance of running a web server on the Raspberry Pi. The test was conducted on the current top three most popular web servers, which are: Apache, Nginx, and Lighttpd. The parameters used to evaluate the feasibility and performance of these web servers were: maximum request and reply time. The results from the test showed that it is feasible to run all three web servers on the Raspberry Pi, but Nginx gave the best performance, followed by Lighttpd and Apache. Keywords: Raspberry Pi, web server, Apache, Lighttpd, Nginx, web server performance
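
    A minimal sketch of measuring the request/reply-time parameter used in the comparison; the URL and request count are assumptions, and the paper's actual benchmark tooling is not specified here:

```python
import time, urllib.request

def benchmark(url, n=100):
    """Issue n sequential GET requests and report the maximum and mean reply time (seconds)."""
    times = []
    for _ in range(n):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        times.append(time.perf_counter() - start)
    return max(times), sum(times) / len(times)

# worst, mean = benchmark("http://raspberrypi.local/")   # hypothetical Raspberry Pi address
# print(f"max reply time {worst:.3f}s, mean {mean:.3f}s")
```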

  9. Using Web Server Logs to Track Users through the Electronic Forest

    Science.gov (United States)

    Coombs, Karen A.

    2005-01-01

    This article analyzes server logs, providing helpful information in making decisions about Web-based services. The author indicates, as a result of analyzing server logs, several interesting things about the users' behavior were learned. The resulting findings are discussed in this article. Certain pages of the author's Web site, for instance, are…

  10. DelPhiPKa web server: predicting pKa of proteins, RNAs and DNAs.

    Science.gov (United States)

    Wang, Lin; Zhang, Min; Alexov, Emil

    2016-02-15

    A new pKa prediction web server is released, which implements DelPhi Gaussian dielectric function to calculate electrostatic potentials generated by charges of biomolecules. Topology parameters are extended to include atomic information of nucleotides of RNA and DNA, which extends the capability of pKa calculations beyond proteins. The web server allows the end-user to protonate the biomolecule at particular pH based on calculated pKa values and provides the downloadable file in PQR format. Several tests are performed to benchmark the accuracy and speed of the protocol. The web server follows a client-server architecture built on PHP and HTML and utilizes DelPhiPKa program. The computation is performed on the Palmetto supercomputer cluster and results/download links are given back to the end-user via http protocol. The web server takes advantage of MPI parallel implementation in DelPhiPKa and can run a single job on up to 24 CPUs. The DelPhiPKa web server is available at http://compbio.clemson.edu/pka_webserver. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. TBI server: a web server for predicting ion effects in RNA folding.

    Science.gov (United States)

    Zhu, Yuhong; He, Zhaojian; Chen, Shi-Jie

    2015-01-01

    Metal ions play a critical role in the stabilization of RNA structures. Therefore, accurate prediction of the ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation. These ions can possibly become strongly correlated in the close vicinity of RNA surface. Most of the currently available software packages, which have widespread success in predicting ion effects in biomolecular systems, however, do not explicitly account for the ion correlation effect. Therefore, it is important to develop a software package/web server for the prediction of ion electrostatics in RNA folding by including ion correlation effects. The TBI web server http://rna.physics.missouri.edu/tbi_index.html provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and the most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for the ion correlation and fluctuation effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties for given RNA structures. The results can provide important data for in-depth analysis for ion effects in RNA folding including the ion-dependence of folding stability, ion uptake in the folding process, and the interplay between the different energetic components.

  12. TBI server: a web server for predicting ion effects in RNA folding.

    Directory of Open Access Journals (Sweden)

    Yuhong Zhu

    Full Text Available Metal ions play a critical role in the stabilization of RNA structures. Therefore, accurate prediction of the ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation. These ions can possibly become strongly correlated in the close vicinity of RNA surface. Most of the currently available software packages, which have widespread success in predicting ion effects in biomolecular systems, however, do not explicitly account for the ion correlation effect. Therefore, it is important to develop a software package/web server for the prediction of ion electrostatics in RNA folding by including ion correlation effects. The TBI web server http://rna.physics.missouri.edu/tbi_index.html provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and the most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for the ion correlation and fluctuation effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties for given RNA structures. The results can provide important data for in-depth analysis for ion effects in RNA folding including the ion-dependence of folding stability, ion uptake in the folding process, and the interplay between the different energetic components.

  13. Web System for Data Quality Assessment of Tile Calorimeter During the ATLAS Operation

    International Nuclear Information System (INIS)

    Maidantchik, C; Ferreira, F; Grael, F; Sivolella, A; Balabram, L

    2011-01-01

    TileCal, the barrel hadronic calorimeter of the ATLAS experiment, gathers about 10,000 electronic channels. Supervision of the detector behavior is very important in order to ensure proper operation. Collaborators perform analysis over reconstructed data of calibration runs to give detailed assessments of the equipment status. During the commissioning period, our group developed seven web systems to support the data quality (DQ) assessment task. Each system covers a part of the process by providing information on the latest runs, displaying the DQ status from the monitoring framework, giving details about power supplies operation, presenting the generated plots and storing the validation outcomes, assisting in writing logbook entries, creating and submitting the bad channels list to the conditions database, and publishing the equipment performance history. The ATLAS operation increases the amount of data that are retrieved, processed and stored by the web systems. In order to accomplish the new requirements, an optimized data model was designed to reduce the number of needed queries. The web systems were reassembled into a single system in order to provide an integrated view of the validation process. The server load was minimized by using asynchronous requests from the browser.

  14. WEB-server for search of a periodicity in amino acid and nucleotide sequences

    Science.gov (United States)

    E Frenkel, F.; Skryabin, K. G.; Korotkov, E. V.

    2017-12-01

    A new web server (http://victoria.biengi.ac.ru/splinter/login.php) was designed and developed to search for periodicity in nucleotide and amino acid sequences. The web server operation is based on a new mathematical method for searching for multiple alignments, founded on optimization of position weight matrices and on two-dimensional dynamic programming. This approach allows the construction of multiple alignments of indistinctly similar amino acid and nucleotide sequences that have accumulated more than 1.5 substitutions per amino acid or nucleotide, without performing pairwise comparisons of the sequences. The article examines the principles of the web server operation and two examples of studying amino acid and nucleotide sequences, as well as information that can be obtained using the web server.

  15. The PETfold and PETcofold web servers for intra- and intermolecular structures of multiple RNA sequences

    DEFF Research Database (Denmark)

    Seemann, Ernst Stefan; Menzel, Karl Peter; Backofen, Rolf

    2011-01-01

    gene. We present web servers to analyze multiple RNA sequences for common RNA structure and for RNA interaction sites. The web servers are based on the recent PET (Probabilistic Evolutionary and Thermodynamic) models PETfold and PETcofold, but add user friendly features ranging from a graphical layer...... to interactive usage of the predictors. Additionally, the web servers provide direct access to annotated RNA alignments, such as the Rfam 10.0 database and multiple alignments of 16 vertebrate genomes with human. The web servers are freely available at: http://rth.dk/resources/petfold/...

  16. A Web-Based Airborne Remote Sensing Telemetry Server, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A Web-based Airborne Remote Sensing Telemetry Server (WARSTS) is proposed to integrate UAV telemetry and web-technology into an innovative communication, command,...

  17. X-Switch: An Efficient , Multi-User, Multi-Language Web Application Server

    Directory of Open Access Journals (Sweden)

    Mayumbo Nyirenda

    2010-07-01

    Full Text Available Web applications are usually installed on and accessed through a Web server. For security reasons, these Web servers generally provide very few privileges to Web applications, defaulting to executing them in the realm of a guest account. In addition, performance often is a problem as Web applications may need to be reinitialised with each access. Various solutions have been designed to address these security and performance issues, mostly independently of one another, but most have been language- or system-specific. The X-Switch system is proposed as an alternative Web application execution environment, with more secure user-based resource management, persistent application interpreters and support for arbitrary languages/interpreters. Thus it provides a general-purpose environment for developing and deploying Web applications. The X-Switch system's experimental results demonstrated that it can achieve a high level of performance. Furthermore it was shown that X-Switch can provide functionality matching that of existing Web application servers but with the added benefit of multi-user support. Finally the X-Switch system showed that it is feasible to completely separate the deployment platform from the application code, thus ensuring that the developer does not need to modify his/her code to make it compatible with the deployment platform.

  18. Setup Instructions for the Applied Anomaly Detection Tool (AADT) Web Server

    Science.gov (United States)

    2016-09-01

    The tool has been developed for many platforms: Android, iOS, and Windows. The Windows version has been developed as a web server that allows the... Microsoft Windows. ...instructional information about identifying them as groups and individually. The software has been developed for several different platforms: Android

  19. Analisis Algoritma Pergantian Cache Pada Proxy Web Server Internet Dengan Simulasi

    OpenAIRE

    Nurwarsito, Heru

    2007-01-01

    The number of internet clients keeps growing over time, so the response of internet access becomes increasingly slow. To help speed up access, a cache on the Proxy Server is needed. This research aims to analyze the performance of a Proxy Server on an Internet network with respect to the cache replacement algorithm it uses. The analysis of cache replacement algorithms on the Proxy Server was designed using a simulation model of an internet network consisting of a Web server, Proxy ...

  20. CentroidFold: a web server for RNA secondary structure prediction

    OpenAIRE

    Sato, Kengo; Hamada, Michiaki; Asai, Kiyoshi; Mituyama, Toutai

    2009-01-01

    The CentroidFold web server (http://www.ncrna.org/centroidfold/) is a web application for RNA secondary structure prediction powered by one of the most accurate prediction engines. The server accepts two kinds of sequence data: a single RNA sequence and a multiple alignment of RNA sequences. It responds with a prediction result shown in a popular base-pair notation and in a graph representation. A PDF version of the graph representation is also available. For a multiple alignment sequence, the ser...

  1. RNAiFold: a web server for RNA inverse folding and molecular design.

    Science.gov (United States)

    Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan

    2013-07-01

    Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints, such as requiring GC-content to lie within a certain range, requiring the number of strong (GC), weak (AU) and wobble (GU) base pairs to lie in a certain range, the RNAiFold web server determines one or more RNA sequences, whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic; hence, it is suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem, i.e. given a representation of the desired hybridization structure, RNAiFold returns two sequences, whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold, which provides access to two specialized servers: RNA-CPdesign and RNA-LNSdesign. Source code for the underlying algorithms, implemented in COMET and supported on linux, can be downloaded at the server website.

  2. EnviroAtlas Community Block Group Metrics Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This web service includes...

  3. EnviroAtlas Proximity to Parks Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This EnviroAtlas dataset shows...

  4. A decade of Web Server updates at the Bioinformatics Links Directory: 2003-2012.

    Science.gov (United States)

    Brazas, Michelle D; Yim, David; Yeung, Winston; Ouellette, B F Francis

    2012-07-01

    The 2012 Bioinformatics Links Directory update marks the 10th special Web Server issue from Nucleic Acids Research. Beginning with content from their 2003 publication, the Bioinformatics Links Directory in collaboration with Nucleic Acids Research has compiled and published a comprehensive list of freely accessible, online tools, databases and resource materials for the bioinformatics and life science research communities. The past decade has exhibited significant growth and change in the types of tools, databases and resources being put forth, reflecting both technology changes and the nature of research over that time. With the addition of 90 web server tools and 12 updates from the July 2012 Web Server issue of Nucleic Acids Research, the Bioinformatics Links Directory at http://bioinformatics.ca/links_directory/ now contains an impressive 134 resources, 455 databases and 1205 web server tools, mirroring the continued activity and efforts of our field.

  5. Expitope: a web server for epitope expression.

    Science.gov (United States)

    Haase, Kerstin; Raffegerst, Silke; Schendel, Dolores J; Frishman, Dmitrij

    2015-06-01

    Adoptive T cell therapies based on introduction of new T cell receptors (TCRs) into patient recipient T cells is a promising new treatment for various kinds of cancers. A major challenge, however, is the choice of target antigens. If an engineered TCR can cross-react with self-antigens in healthy tissue, the side-effects can be devastating. We present the first web server for assessing epitope sharing when designing new potential lead targets. We enable the users to find all known proteins containing their peptide of interest. The web server returns not only exact matches, but also approximate ones, allowing a number of mismatches of the users choice. For the identified candidate proteins the expression values in various healthy tissues, representing all vital human organs, are extracted from RNA Sequencing (RNA-Seq) data as well as from some cancer tissues as control. All results are returned to the user sorted by a score, which is calculated using well-established methods and tools for immunological predictions. It depends on the probability that the epitope is created by proteasomal cleavage and its affinities to the transporter associated with antigen processing and the major histocompatibility complex class I alleles. With this framework, we hope to provide a helpful tool to exclude potential cross-reactivity in the early stage of TCR selection for use in design of adoptive T cell immunotherapy. The Expitope web server can be accessed via http://webclu.bio.wzw.tum.de/expitope. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. DelPhi web server v2: incorporating atomic-style geometrical figures into the computational protocol.

    Science.gov (United States)

    Smith, Nicholas; Witham, Shawn; Sarkar, Subhra; Zhang, Jie; Li, Lin; Li, Chuan; Alexov, Emil

    2012-06-15

    A new edition of the DelPhi web server, DelPhi web server v2, is released to include atomic presentation of geometrical figures. These geometrical objects can be used to model nano-size objects together with real biological macromolecules. The position and size of the object can be manipulated by the user in real time until desired results are achieved. The server fixes structural defects, adds hydrogen atoms and calculates electrostatic energies and the corresponding electrostatic potential and ionic distributions. The web server follows a client-server architecture built on PHP and HTML and utilizes DelPhi software. The computation is carried out on supercomputer cluster and results are given back to the user via http protocol, including the ability to visualize the structure and corresponding electrostatic potential via Jmol implementation. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver.

  7. Rclick: a web server for comparison of RNA 3D structures.

    Science.gov (United States)

    Nguyen, Minh N; Verma, Chandra

    2015-03-15

    RNA molecules play important roles in key biological processes in the cell and are becoming attractive for developing therapeutic applications. Since the function of RNA depends on its structure and dynamics, comparing and classifying the RNA 3D structures is of crucial importance to molecular biology. In this study, we have developed Rclick, a web server that is capable of superimposing RNA 3D structures by using clique matching and 3D least-squares fitting. Our server Rclick has been benchmarked and compared with other popular servers and methods for RNA structural alignments. In most cases, Rclick alignments were better in terms of structure overlap. Our server also recognizes conformational changes between structures. For this purpose, the server produces complementary alignments to maximize the extent of detectable similarity. Various examples showcase the utility of our web server for comparison of RNA, RNA-protein complexes and RNA-ligand structures. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
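    The 3D least-squares fitting step mentioned above is commonly implemented with the Kabsch algorithm. The Python sketch below illustrates that idea on matched coordinate sets; it is not Rclick's own code, and the random coordinates merely stand in for corresponding atoms (for example C1' atoms) of two aligned RNA structures.

```python
import numpy as np

def superimpose(P, Q):
    """Least-squares (Kabsch) superposition of matched coordinate sets.
    P, Q: (N, 3) arrays of corresponding atoms. Returns rotation and RMSD."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)      # centre both sets
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)                  # SVD of the covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T              # optimal rotation
    rmsd = np.sqrt(np.mean(np.sum((Pc @ R.T - Qc) ** 2, axis=1)))
    return R, rmsd

# Toy example: Q is a rotated and shifted copy of P, so the RMSD should be ~0
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
M = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ M.T + 2.0
R, rmsd = superimpose(P, Q)
print(f"RMSD after superposition: {rmsd:.6f}")
```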

  8. DelPhiForce web server: electrostatic forces and energy calculations and visualization.

    Science.gov (United States)

    Li, Lin; Jia, Zhe; Peng, Yunhui; Chakravorty, Arghya; Sun, Lexuan; Alexov, Emil

    2017-11-15

    Electrostatic force is an essential component of the total force acting between atoms and macromolecules. Therefore, accurate calculations of electrostatic forces are crucial for revealing the mechanisms of many biological processes. We developed a DelPhiForce web server to calculate and visualize the electrostatic forces at molecular level. DelPhiForce web server enables modeling of electrostatic forces on individual atoms, residues, domains and molecules, and generates an output that can be visualized by VMD software. Here we demonstrate the usage of the server for various biological problems including protein-cofactor, domain-domain, protein-protein, protein-DNA and protein-RNA interactions. The DelPhiForce web server is available at: http://compbio.clemson.edu/delphi-force. delphi@clemson.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
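    As a rough illustration of what a force calculation involves, the sketch below sums pairwise Coulomb forces between point charges in a uniform dielectric. DelPhiForce itself solves the Poisson-Boltzmann equation on a grid, so this is a simplification of the underlying physics rather than the server's method; the unit convention and dielectric constant are assumptions.

```python
import numpy as np

COULOMB_K = 332.0637  # kcal * mol^-1 * Angstrom * e^-2, a common MM convention

def coulomb_forces(coords, charges, eps=80.0):
    """Pairwise Coulomb forces; coords in Angstrom (N, 3), charges in e (N,)."""
    n = len(charges)
    forces = np.zeros((n, 3))
    for i in range(n):
        for j in range(i + 1, n):
            rij = coords[i] - coords[j]
            r = np.linalg.norm(rij)
            f = COULOMB_K * charges[i] * charges[j] / (eps * r**3) * rij
            forces[i] += f          # force on i due to j
            forces[j] -= f          # Newton's third law
    return forces

coords = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
charges = np.array([+1.0, -1.0, +0.5])
print(coulomb_forces(coords, charges))
```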

  9. AMMOS2: a web server for protein–ligand–water complexes refinement via molecular mechanics

    Science.gov (United States)

    Labbé, Céline M.; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O.; Pajeva, Ilza

    2017-01-01

    Abstract AMMOS2 is an interactive web server for efficient computational refinement of protein–small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein–ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein–ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein–ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein–ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein–ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein–ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. PMID:28486703

  10. Amino Acid Interaction (INTAA) web server.

    Science.gov (United States)

    Galgonek, Jakub; Vymetal, Jirí; Jakubec, David; Vondrášek, Jirí

    2017-07-03

    Large biomolecules (proteins and nucleic acids) are composed of building blocks which define their identity, properties and binding capabilities. In order to shed light on the energetic side of interactions of amino acids between themselves and with deoxyribonucleotides, we present the Amino Acid Interaction web server (http://bioinfo.uochb.cas.cz/INTAA/). INTAA offers the calculation of the residue Interaction Energy Matrix for any protein structure (deposited in Protein Data Bank or submitted by the user) and a comprehensive analysis of the interfaces in protein-DNA complexes. The Interaction Energy Matrix web application aims to identify key residues within protein structures which contribute significantly to the stability of the protein. The application provides an interactive user interface enhanced by a 3D structure viewer for efficient visualization of pairwise and net interaction energies of individual amino acids, side chains and backbones. The protein-DNA interaction analysis part of the web server allows the user to view the relative abundance of various configurations of amino acid-deoxyribonucleotide pairs found at the protein-DNA interface and the interaction energies corresponding to these configurations calculated using a molecular mechanical force field. The effects of the sugar-phosphate moiety and of the dielectric properties of the solvent on the interaction energies can be studied for the various configurations. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
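    The bookkeeping behind a residue Interaction Energy Matrix can be pictured as summing atom-atom energies into residue-by-residue entries. The Python sketch below does this with a Coulomb term only; INTAA's molecular-mechanics force field includes further terms, so this illustrates the matrix construction rather than the server's energy function.

```python
import numpy as np

K = 332.0637  # kcal * mol^-1 * Angstrom * e^-2 (illustrative unit convention)

def interaction_energy_matrix(residues):
    """residues: list of (coords (M_i, 3), charges (M_i,)) tuples, one per residue."""
    n = len(residues)
    E = np.zeros((n, n))
    for i in range(n):
        ci, qi = residues[i]
        for j in range(i + 1, n):
            cj, qj = residues[j]
            d = np.linalg.norm(ci[:, None, :] - cj[None, :, :], axis=-1)
            e = K * np.sum(np.outer(qi, qj) / d)   # sum over all atom pairs
            E[i, j] = E[j, i] = e
    return E

# Two single-atom "residues" 3 Angstrom apart, just to show the call
res_a = (np.array([[0.0, 0.0, 0.0]]), np.array([+0.5]))
res_b = (np.array([[0.0, 0.0, 3.0]]), np.array([-0.5]))
print(interaction_energy_matrix([res_a, res_b]))
```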

  11. Cyber-T web server: differential analysis of high-throughput data.

    Science.gov (United States)

    Kayala, Matthew A; Baldi, Pierre

    2012-07-01

    The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001: 17: 509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing data normalization options including logarithmic and (Variance Stabilizing Normalization) VSN transforms are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple tests correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
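    The heart of the method is a regularized variance that blends a pooled 'neighborhood' (prior) variance with the per-probe empirical variance. The Python sketch below follows the general form of the Baldi and Long estimator; the prior weight v0, the degrees-of-freedom adjustment and the example numbers are illustrative assumptions, not Cyber-T's exact implementation.

```python
import numpy as np
from scipy import stats

def regularized_ttest(x, y, sigma0_sq_x, sigma0_sq_y, v0=10):
    """Sketch of a Bayesian-regularized two-sample t-test in the spirit of Cyber-T.
    sigma0_sq_* are background variances pooled from probes with similar expression
    (the neighborhood prior); v0 is the prior weight in pseudo-observations."""
    nx, ny = len(x), len(y)
    # Regularized variances: weighted blend of prior and empirical variance
    vx = (v0 * sigma0_sq_x + (nx - 1) * np.var(x, ddof=1)) / (v0 + nx - 2)
    vy = (v0 * sigma0_sq_y + (ny - 1) * np.var(y, ddof=1)) / (v0 + ny - 2)
    t = (np.mean(x) - np.mean(y)) / np.sqrt(vx / nx + vy / ny)
    df = nx + ny - 2 + 2 * v0          # effective degrees of freedom (approximate)
    p = 2 * stats.t.sf(abs(t), df)
    return t, p

x = np.array([8.1, 8.4, 7.9])          # toy log-expression values, condition A
y = np.array([9.0, 9.3, 9.1])          # toy log-expression values, condition B
print(regularized_ttest(x, y, sigma0_sq_x=0.05, sigma0_sq_y=0.05))
```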

  12. AMMOS2: a web server for protein-ligand-water complexes refinement via molecular mechanics.

    Science.gov (United States)

    Labbé, Céline M; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O; Pajeva, Ilza; Miteva, Maria A

    2017-07-03

    AMMOS2 is an interactive web server for efficient computational refinement of protein-small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein-ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein-ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein-ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein-ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein-ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein-ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. Building a Library Web Server on a Budget.

    Science.gov (United States)

    Orr, Giles

    1998-01-01

    Presents a method for libraries with limited budgets to create reliable Web servers with existing hardware and free software available via the Internet. Discusses staff, hardware and software requirements, and security; outlines the assembly process. (PEN)

  14. PONGO: a web server for multiple predictions of all-alpha transmembrane proteins

    DEFF Research Database (Denmark)

    Amico, M.; Finelli, M.; Rossi, I.

    2006-01-01

    of the organism and more importantly with the same sequence profile for a given sequence when required. Here we present a new web server that incorporates the state-of-the-art topology predictors in a single framework, so that putative users can interactively compare and evaluate four predictions simultaneously...... for a given sequence. Together with the predicted topology, the server also displays a signal peptide prediction determined with SPEP. The PONGO web server is available at http://pongo.biocomp.unibo.it/pongo .......The annotation efforts of the BIOSAPIENS European Network of Excellence have generated several distributed annotation systems (DAS) with the aim of integrating Bioinformatics resources and annotating metazoan genomes ( http://www.biosapiens.info/ ). In this context, the PONGO DAS server ( http...

  15. The new protein topology graph library web server.

    Science.gov (United States)

    Schäfer, Tim; Scheck, Andreas; Bruneß, Daniel; May, Patrick; Koch, Ina

    2016-02-01

    We present a new, extended version of the Protein Topology Graph Library web server. The Protein Topology Graph Library describes the protein topology on the super-secondary structure level. It allows to compute and visualize protein ligand graphs and search for protein structural motifs. The new server features additional information on ligand binding to secondary structure elements, increased usability and an application programming interface (API) to retrieve data, allowing for an automated analysis of protein topology. The Protein Topology Graph Library server is freely available on the web at http://ptgl.uni-frankfurt.de. The website is implemented in PHP, JavaScript, PostgreSQL and Apache. It is supported by all major browsers. The VPLG software that was used to compute the protein ligand graphs and all other data in the database is available under the GNU public license 2.0 from http://vplg.sourceforge.net. tim.schaefer@bioinformatik.uni-frankfurt.de; ina.koch@bioinformatik.uni-frankfurt.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Studying the co-evolution of protein families with the Mirrortree web server.

    Science.gov (United States)

    Ochoa, David; Pazos, Florencio

    2010-05-15

    The Mirrortree server allows to graphically and interactively study the co-evolution of two protein families, and investigate their possible interactions and functional relationships in a taxonomic context. The server includes the possibility of starting from single sequences and hence it can be used by non-expert users. The web server is freely available at http://csbg.cnb.csic.es/mtserver. It was tested in the main web browsers. Adobe Flash Player is required at the client side to perform the interactive assessment of co-evolution. pazos@cnb.csic.es Supplementary data are available at Bioinformatics online.

  17. Vfold: a web server for RNA structure and folding thermodynamics prediction.

    Science.gov (United States)

    Xu, Xiaojun; Zhao, Peinan; Chen, Shi-Jie

    2014-01-01

    The ever increasing discovery of non-coding RNAs leads to unprecedented demand for the accurate modeling of RNA folding, including the predictions of two-dimensional (base pair) and three-dimensional all-atom structures and folding stabilities. Accurate modeling of RNA structure and stability has far-reaching impact on our understanding of RNA functions in human health and our ability to design RNA-based therapeutic strategies. The Vfold server offers a web interface to predict (a) RNA two-dimensional structure from the nucleotide sequence, (b) three-dimensional structure from the two-dimensional structure and the sequence, and (c) folding thermodynamics (heat capacity melting curve) from the sequence. To predict the two-dimensional structure (base pairs), the server generates an ensemble of structures, including loop structures with the different intra-loop mismatches, and evaluates the free energies using the experimental parameters for the base stacks and the loop entropy parameters given by a coarse-grained RNA folding model (the Vfold model) for the loops. To predict the three-dimensional structure, the server assembles the motif scaffolds using structure templates extracted from the known PDB structures and refines the structure using all-atom energy minimization. The Vfold-based web server provides a user friendly tool for the prediction of RNA structure and stability. The web server and the source codes are freely accessible for public use at "http://rna.physics.missouri.edu".
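    The heat capacity melting curve mentioned in (c) is the temperature derivative of the ensemble-average enthalpy obtained from a partition function. The sketch below uses a two-state model with assumed enthalpy and entropy values to show that relationship; Vfold's actual calculation sums over loop and stack configurations and is far more detailed.

```python
import numpy as np

R = 0.001987  # gas constant, kcal * mol^-1 * K^-1

dH, dS = 50.0, 0.15                      # assumed unfolding enthalpy and entropy
T = np.linspace(273.0, 373.0, 500)       # temperature range in kelvin
K_eq = np.exp(-(dH - T * dS) / (R * T))  # folded <-> unfolded equilibrium constant
p_unfolded = K_eq / (1.0 + K_eq)
avg_H = p_unfolded * dH                  # average enthalpy relative to the folded state
C = np.gradient(avg_H, T)                # heat capacity melting curve C(T)
print(f"melting peak near {T[np.argmax(C)]:.1f} K "
      f"(two-state estimate dH/dS = {dH / dS:.1f} K)")
```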

  18. NOBAI: a web server for character coding of geometrical and statistical features in RNA structure

    Science.gov (United States)

    Knudsen, Vegeir; Caetano-Anollés, Gustavo

    2008-01-01

    The Numeration of Objects in Biology: Alignment Inferences (NOBAI) web server provides a web interface to the applications in the NOBAI software package. This software codes topological and thermodynamic information related to the secondary structure of RNA molecules as multi-state phylogenetic characters, builds character matrices directly in NEXUS format and provides sequence randomization options. The web server is an effective tool that facilitates the search for evolutionary history embedded in the structure of functional RNA molecules. The NOBAI web server is accessible at ‘http://www.manet.uiuc.edu/nobai/nobai.php’. This web site is free and open to all users and there is no login requirement. PMID:18448469

  19. Analisis Perbandingan Load Balancing Web Server Tunggal Dengan Web Server Cluster Menggunakan Linux Virtual Server

    OpenAIRE

    Lukitasari, Desy; Oklilas, Ahmad Fali

    2010-01-01

    Virtual server adalah server yang mempunyai skalabilitas dan ketersedian yang tinggi yang dibangun diatas sebuah cluster dari beberapa real server. Real server dan load balancer akan saling terkoneksi baik dalam jaringan lokal kecepatan tinggi atau yang terpisah secara geografis. Load balancer dapat mengirim permintaan-permintaan ke server yang berbeda dan membuat paralel service dari sebuah cluster pada sebuah alamat IP tunggal dan meminta pengiriman dapat menggunakan teknologi IP load...

  20. Using Kalman Filter to Guarantee QoS Robustness of Web Server

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The exponential growth of the Internet, coupled with the increasing popularity of dynamically generated content on the World Wide Web, has created the need for more and faster Web servers capable of serving the over 100 million Internet users. Control methods have emerged as a promising technique to solve the Web QoS problem. In this paper, a model of adaptive sessions is presented and a session flow self-regulating algorithm based on the Kalman filter is proposed for Web servers, and a Web QoS self-regulating scheme is advanced. To attain the goal of on-line system identification, the optimized estimation of QoS parameters is fulfilled by utilizing the Kalman filter over the full domain. The simulation results show that the proposed scheme can guarantee QoS with both robustness and stability.
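    A scalar Kalman filter applied to a noisy QoS measurement, such as the observed request rate, gives the flavor of the approach. The sketch below is a generic filter with assumed process and measurement noise variances, not the state model or tuning used in the paper.

```python
# Scalar Kalman filter smoothing a noisy requests-per-second measurement before a
# controller acts on it. q and r_meas are assumed noise variances.
def kalman_track(measurements, q=0.5, r_meas=4.0):
    x, p = measurements[0], 1.0          # initial state estimate and variance
    estimates = []
    for z in measurements:
        p = p + q                        # predict: variance grows by process noise
        k = p / (p + r_meas)             # Kalman gain
        x = x + k * (z - x)              # update with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

noisy_rps = [120, 131, 118, 140, 155, 149, 162, 170, 166, 180]
print([round(v, 1) for v in kalman_track(noisy_rps)])
```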

  1. EnviroAtlas Near Road Tree Buffer Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This EnviroAtlas dataset...

  2. PREFMD: a web server for protein structure refinement via molecular dynamics simulations.

    Science.gov (United States)

    Heo, Lim; Feig, Michael

    2018-03-15

    Refinement of protein structure models is a long-standing problem in structural bioinformatics. Molecular dynamics-based methods have emerged as an avenue to achieve consistent refinement. The PREFMD web server implements an optimized protocol based on the method successfully tested in CASP11. Validation with recent CASP refinement targets shows consistent and more significant improvement in global structure accuracy over other state-of-the-art servers. PREFMD is freely available as a web server at http://feiglab.org/prefmd. Scripts for running PREFMD as a stand-alone package are available at https://github.com/feiglab/prefmd.git. feig@msu.edu. Supplementary data are available at Bioinformatics online.

  3. DMINDA: an integrated web server for DNA motif identification and analyses.

    Science.gov (United States)

    Ma, Qin; Zhang, Hanyuan; Mao, Xizeng; Zhou, Chuan; Liu, Bingqiang; Chen, Xin; Xu, Ying

    2014-07-01

    DMINDA (DNA motif identification and analyses) is an integrated web server for DNA motif identification and analyses, which is accessible at http://csbl.bmb.uga.edu/DMINDA/. This web site is freely available to all users and there is no login requirement. This server provides a suite of cis-regulatory motif analysis functions on DNA sequences, which are important to elucidation of the mechanisms of transcriptional regulation: (i) de novo motif finding for a given set of promoter sequences along with statistical scores for the predicted motifs derived based on information extracted from a control set, (ii) scanning motif instances of a query motif in provided genomic sequences, (iii) motif comparison and clustering of identified motifs, and (iv) co-occurrence analyses of query motifs in given promoter sequences. The server is powered by a backend computer cluster with over 150 computing nodes, and is particularly useful for motif prediction and analyses in prokaryotic genomes. We believe that DMINDA, as a new and comprehensive web server for cis-regulatory motif finding and analyses, will benefit the genomic research community in general and prokaryotic genome researchers in particular. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
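    Function (ii), scanning motif instances, can be pictured as sliding a position weight matrix along a promoter sequence and reporting windows whose log-odds score exceeds a threshold. The Python sketch below shows that idea with a toy matrix and threshold; DMINDA's own scoring additionally uses statistics derived from a control sequence set.

```python
import math

def scan(sequence, pwm, background=0.25, threshold=4.0):
    """Report windows whose log-odds score against the PWM exceeds the threshold."""
    width = len(pwm)
    hits = []
    for i in range(len(sequence) - width + 1):
        window = sequence[i:i + width]
        score = sum(math.log2(pwm[j].get(base, 1e-6) / background)
                    for j, base in enumerate(window))
        if score >= threshold:
            hits.append((i, window, round(score, 2)))
    return hits

# Toy 4-column matrix favouring the motif AGCA; values are illustrative only
pwm = [{"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
       {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
       {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
       {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1}]
print(scan("TTAGCAAGCAGGCATT", pwm))
```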

  4. PENGUKURAN KINERJA ROUND-ROBIN SCHEDULER UNTUK LINUX VIRTUAL SERVER PADA KASUS WEB SERVER

    Directory of Open Access Journals (Sweden)

    Royyana Muslim Ijtihadie

    2005-07-01

    Full Text Available With the growing number of internet users and the increasing adoption of the internet in daily life, data traffic on the Internet has increased significantly. Along with this, the workload of the servers that provide services on the Internet has also risen considerably. This can cause a server to become overloaded at some point. To overcome this, a server cluster configuration scheme using the load balancing concept is applied. A load balancing server applies an algorithm to divide the work. The round-robin algorithm has been used in the Linux Virtual Server. This research measures the performance of a Linux Virtual Server that uses the round-robin algorithm to schedule load distribution across the servers. The performance is measured from the side of clients attempting to access the web server; the measured metrics are the number of requests completed per second (requests per second), the time to complete a single request, and the resulting throughput. The experiments show that using LVS can improve performance, namely increasing the number of requests per second
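    Round-robin scheduling simply hands each incoming request to the next real server in turn, and the client-side metrics named above (requests per second, time per request, throughput) fall out of timing a batch of requests. The sketch below simulates both with placeholder backends; no real LVS director or network sockets are involved.

```python
import itertools
import time

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # placeholder real servers
rr = itertools.cycle(backends)                    # round robin: each backend in turn

def handle(request_id):
    server = next(rr)                             # pick the next backend
    return server, len(f"response-{request_id}-from-{server}".encode())

start, total_bytes, n_requests = time.perf_counter(), 0, 10000
for i in range(n_requests):
    _, size = handle(i)
    total_bytes += size
elapsed = time.perf_counter() - start
print(f"requests/second : {n_requests / elapsed:.0f}")
print(f"time per request: {elapsed / n_requests * 1000:.3f} ms")
print(f"throughput      : {total_bytes / elapsed / 1024:.1f} KiB/s")
```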

  5. A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.

    Science.gov (United States)

    Cohen, Laura B.

    2003-01-01

    Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…

  6. BEAM web server: a tool for structural RNA motif discovery.

    Science.gov (United States)

    Pietrosanto, Marco; Adinolfi, Marta; Casula, Riccardo; Ausiello, Gabriele; Ferrè, Fabrizio; Helmer-Citterich, Manuela

    2018-03-15

    RNA structural motif finding is a relevant problem that becomes computationally hard when working on high-throughput data (e.g. eCLIP, PAR-CLIP), often represented by thousands of RNA molecules. Currently, the BEAM server is the only web tool capable of handling tens of thousands of RNAs as input with a motif discovery procedure that is only limited by the current secondary structure prediction accuracies. The recently developed method BEAM (BEAr Motifs finder) can analyze tens of thousands of RNA molecules and identify RNA secondary structure motifs associated with a measure of their statistical significance. BEAM is extremely fast thanks to the BEAR encoding that transforms each RNA secondary structure into a string of characters. BEAM also exploits the evolutionary knowledge contained in a substitution matrix of secondary structure elements, extracted from the RFAM database of families of homologous RNAs. The BEAM web server has been designed to streamline data pre-processing by automatically handling folding and encoding of RNA sequences, giving users a choice for the preferred folding program. The server provides an intuitive and informative results page with the list of secondary structure motifs identified, the logo of each motif, its significance, graphic representation and information about its position in the RNA molecules sharing it. The web server is freely available at http://beam.uniroma2.it/ and it is implemented in NodeJS and Python with all major browsers supported. marco.pietrosanto@uniroma2.it. Supplementary data are available at Bioinformatics online.

  7. COMAN: a web server for comprehensive metatranscriptomics analysis.

    Science.gov (United States)

    Ni, Yueqiong; Li, Jun; Panagiotou, Gianni

    2016-08-11

    Microbiota-oriented studies based on metagenomic or metatranscriptomic sequencing have revolutionised our understanding of microbial ecology and the roles of both clinical and environmental microbes. The analysis of massive metatranscriptomic data requires extensive computational resources, a collection of bioinformatics tools and expertise in programming. We developed COMAN (Comprehensive Metatranscriptomics Analysis), a web-based tool dedicated to automatically and comprehensively analysing metatranscriptomic data. The COMAN pipeline includes quality control of raw reads and removal of reads derived from non-coding RNA, followed by functional annotation, comparative statistical analysis, pathway enrichment analysis, co-expression network analysis and high-quality visualisation. The essential data generated by COMAN are also provided in tabular format for additional analysis and integration with other software. The web server has an easy-to-use interface and detailed instructions, and is freely available at http://sbb.hku.hk/COMAN/. COMAN is an integrated web server dedicated to comprehensive functional analysis of metatranscriptomic data, translating massive amounts of reads into data tables and high-standard figures. It is expected to facilitate researchers with less expertise in bioinformatics in answering microbiota-related biological questions and to increase the accessibility and interpretation of microbiota RNA-Seq data.

  8. Comparing speed of Web Map Service with GeoServer on ESRI Shapefile and PostGIS

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2016-07-01

    Full Text Available There are several options for configuring a Web Map Service using several map servers. GeoServer is one of the most popular map servers nowadays. GeoServer is able to read data from several sources. A very popular data source is the ESRI Shapefile. It is well documented and most software for geodata processing is able to read and write data in this format. Another very popular data store is the PostgreSQL/PostGIS object-relational database. Both data sources have advantages and disadvantages, and a user of GeoServer has to decide which one to use. The paper describes a comparison of the performance of the GeoServer Web Map Service when reading data from an ESRI Shapefile or from a PostgreSQL/PostGIS database.
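    The measurement behind such a comparison can be approximated by timing identical WMS GetMap requests against a Shapefile-backed and a PostGIS-backed layer. The sketch below assumes a locally running GeoServer and hypothetical layer names; only the standard WMS 1.1.1 request parameters are taken as given.

```python
import time
import requests

BASE = "http://localhost:8080/geoserver/wms"      # assumed GeoServer endpoint
PARAMS = {
    "service": "WMS", "version": "1.1.1", "request": "GetMap",
    "srs": "EPSG:4326", "bbox": "12.0,48.0,19.0,51.5",
    "width": "1024", "height": "512", "format": "image/png",
}

def time_layer(layer, repeats=20):
    """Average GetMap response time for one layer."""
    elapsed = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        r = requests.get(BASE, params={**PARAMS, "layers": layer}, timeout=30)
        r.raise_for_status()
        elapsed.append(time.perf_counter() - t0)
    return sum(elapsed) / len(elapsed)

for layer in ("demo:roads_shapefile", "demo:roads_postgis"):   # hypothetical layers
    print(layer, f"{time_layer(layer):.3f} s average")
```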

  9. EnviroAtlas Impervious Proximity Gradient Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). In any given 1-square meter...

  10. A web server for analysis, comparison and prediction of protein ligand binding sites.

    Science.gov (United States)

    Singh, Harinder; Srivastava, Hemant Kumar; Raghava, Gajendra P S

    2016-03-25

    One of the major challenges in the field of systems biology is to understand the interaction between a wide range of proteins and ligands. In the past, methods have been developed for predicting binding sites in a protein for a limited number of ligands. In order to address this problem, we developed a web server named 'LPIcom' to facilitate users in understanding protein-ligand interactions. Analysis, comparison and prediction modules are available in the 'LPIcom' server to predict protein-ligand interacting residues for 824 ligands. Each ligand must have at least 30 protein binding sites in PDB. The analysis module of the server can identify residues preferred in interaction and the binding motif for a given ligand; for example, residues glycine, lysine and arginine are preferred in ATP binding sites. The comparison module of the server allows comparing protein-binding sites of multiple ligands to understand the similarity between ligands based on their binding sites. This module indicates that ATP, ADP and GTP ligands are in the same cluster and thus their binding sites or interacting residues exhibit a high level of similarity. A propensity-based prediction module has been developed for predicting ligand-interacting residues in a protein for more than 800 ligands. In addition, a number of web-based tools have been integrated to facilitate users in creating web logos and two-sample logos comparing ligand-interacting and non-interacting residues. In summary, this manuscript presents a web server for analysis of ligand-interacting residues. This server is available for public use at http://crdd.osdd.net/raghava/lpicom.
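    A propensity in this sense is the frequency of a residue type in known binding sites for a ligand divided by its overall frequency. The counts in the sketch below are invented for illustration; LPIcom derives its propensities from PDB complexes of each ligand.

```python
from collections import Counter

# Toy counts standing in for residues observed in ATP binding sites vs. overall
binding = Counter({"GLY": 120, "LYS": 95, "ARG": 90, "ASP": 40, "LEU": 30})
overall = Counter({"GLY": 700, "LYS": 550, "ARG": 500, "ASP": 520, "LEU": 900})

def propensity(residue):
    p_site = binding[residue] / sum(binding.values())
    p_all = overall[residue] / sum(overall.values())
    return p_site / p_all    # >1 means the residue is preferred in binding sites

for res in binding:
    print(res, round(propensity(res), 2))
```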

  11. LDAP: a web server for lncRNA-disease association prediction.

    Science.gov (United States)

    Lan, Wei; Li, Min; Zhao, Kaijie; Liu, Jin; Wu, Fang-Xiang; Pan, Yi; Wang, Jianxin

    2017-02-01

    Increasing evidence has demonstrated that long noncoding RNAs (lncRNAs) play important roles in many human diseases. Therefore, predicting novel lncRNA-disease associations would contribute to dissecting the complex mechanisms of disease pathogenesis. Some computational methods have been developed to infer lncRNA-disease associations. However, most of these methods infer lncRNA-disease associations based only on a single data resource. In this paper, we propose a new computational method to predict lncRNA-disease associations by integrating multiple biological data resources. Then, we implement this method as a web server for lncRNA-disease association prediction (LDAP). The input of the LDAP server is the lncRNA sequence. LDAP predicts potential lncRNA-disease associations by using a bagging SVM classifier based on lncRNA similarity and disease similarity. The web server is available at http://bioinformatics.csu.edu.cn/ldap. Contact: jxwang@mail.csu.edu.cn. Supplementary data are available at Bioinformatics online.
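    The classification step described above can be sketched with an off-the-shelf bagging ensemble of SVMs. In the sketch below the random feature matrix merely stands in for the similarity-derived features, so this shows the model family rather than LDAP's trained predictor.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))                 # stand-in for similarity-based features
y = rng.integers(0, 2, size=200)               # 1 = known lncRNA-disease association

model = BaggingClassifier(SVC(kernel="rbf", probability=True), n_estimators=25)
model.fit(X[:150], y[:150])
scores = model.predict_proba(X[150:])[:, 1]    # association probability for new pairs
print("mean predicted association probability:", round(float(scores.mean()), 3))
```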

  12. [Radiology information system using HTML, JavaScript, and Web server].

    Science.gov (United States)

    Sone, M; Sasaki, M; Oikawa, H; Yoshioka, K; Ehara, S; Tamakawa, Y

    1997-12-01

    We have developed a radiology information system using intranet techniques, including hypertext markup language, JavaScript, and Web server. JavaScript made it possible to develop an easy-to-use application, as well as to reduce network traffic and load on the server. The system we have developed is inexpensive and flexible, and its development and maintenance are much easier than with the previous system.

  13. A Web Server for MACCS Magnetometer Data

    Science.gov (United States)

    Engebretson, Mark J.

    1998-01-01

    NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.

  14. R3D Align web server for global nucleotide to nucleotide alignments of RNA 3D structures.

    Science.gov (United States)

    Rahrig, Ryan R; Petrov, Anton I; Leontis, Neocles B; Zirbel, Craig L

    2013-07-01

    The R3D Align web server provides online access to 'RNA 3D Align' (R3D Align), a method for producing accurate nucleotide-level structural alignments of RNA 3D structures. The web server provides a streamlined and intuitive interface, input data validation and output that is more extensive and easier to read and interpret than related servers. The R3D Align web server offers a unique Gallery of Featured Alignments, providing immediate access to pre-computed alignments of large RNA 3D structures, including all ribosomal RNAs, as well as guidance on effective use of the server and interpretation of the output. By accessing the non-redundant lists of RNA 3D structures provided by the Bowling Green State University RNA group, R3D Align connects users to structure files in the same equivalence class and the best-modeled representative structure from each group. The R3D Align web server is freely accessible at http://rna.bgsu.edu/r3dalign/.

  15. Atlas Basemaps in Web 2.0 Epoch

    Science.gov (United States)

    Chabaniuk, V.; Dyshlyk, O.

    2016-06-01

    The authors have analyzed their experience of the production of various Electronic Atlases (EA) and Atlas Information Systems (AtIS) of so-called "classical type". These EA/AtIS have been implemented in the past decade in the Web 1.0 architecture (e.g., National Atlas of Ukraine, Atlas of radioactive contamination of Ukraine, and others). One of the main distinguishing features of these atlases was their static nature - the end user could not change the content of EA/AtIS. Base maps are very important element of any EA/AtIS. In classical type EA/AtIS they were static datasets, which consisted of two parts: the topographic data of a fixed scale and data of the administrative-territorial division of Ukraine. It is important to note that the technique of topographic data production was based on the use of direct channels of topographic entity observation (such as aerial photography) for the selected scale. Changes in the information technology of the past half-decade are characterized by the advent of the "Web 2.0 epoch". Due to this, in cartography appeared such phenomena as, for example, "neo-cartography" and various mapping platforms like OpenStreetMap. These changes have forced developers of EA/AtIS to use new atlas basemaps. Our approach is described in the article. The phenomenon of neo-cartography and/or Web 2.0 cartography are analysed by authors using previously developed Conceptual framework of EA/AtIS. This framework logically explains the cartographic phenomena relations of three formations: Web 1.0, Web 1.0x1.0 and Web 2.0. Atlas basemaps of the Web 2.0 epoch are integrated information systems. We use several ways to integrate separate atlas basemaps into the information system - by building: weak integrated information system, structured system and meta-system. This integrated information system consists of several basemaps and falls under the definition of "big data". In real projects it is already used the basemaps of three strata: Conceptual

  16. ATLAS BASEMAPS IN WEB 2.0 EPOCH

    Directory of Open Access Journals (Sweden)

    V. Chabaniuk

    2016-06-01

    Full Text Available The authors have analyzed their experience of the production of various Electronic Atlases (EA and Atlas Information Systems (AtIS of so-called "classical type". These EA/AtIS have been implemented in the past decade in the Web 1.0 architecture (e.g., National Atlas of Ukraine, Atlas of radioactive contamination of Ukraine, and others. One of the main distinguishing features of these atlases was their static nature - the end user could not change the content of EA/AtIS. Base maps are very important element of any EA/AtIS. In classical type EA/AtIS they were static datasets, which consisted of two parts: the topographic data of a fixed scale and data of the administrative-territorial division of Ukraine. It is important to note that the technique of topographic data production was based on the use of direct channels of topographic entity observation (such as aerial photography for the selected scale. Changes in the information technology of the past half-decade are characterized by the advent of the “Web 2.0 epoch”. Due to this, in cartography appeared such phenomena as, for example, "neo-cartography" and various mapping platforms like OpenStreetMap. These changes have forced developers of EA/AtIS to use new atlas basemaps. Our approach is described in the article. The phenomenon of neo-cartography and/or Web 2.0 cartography are analysed by authors using previously developed Conceptual framework of EA/AtIS. This framework logically explains the cartographic phenomena relations of three formations: Web 1.0, Web 1.0x1.0 and Web 2.0. Atlas basemaps of the Web 2.0 epoch are integrated information systems. We use several ways to integrate separate atlas basemaps into the information system – by building: weak integrated information system, structured system and meta-system. This integrated information system consists of several basemaps and falls under the definition of "big data". In real projects it is already used the basemaps of three strata

  17. Servicing the first web server - Tim Berners-Lee's NeXT

    CERN Multimedia

    unknown, Association aBCM

    2009-01-01

    In August 2009 a team from the Association aBCM in Lausanne came to CERN to give the world's first web server a health check under the watchful eye of web pioneer Robert Cailliau. They took an image of the hard drive at this time, copies of which were given to Robert Cailliau and Tim Berners-Lee.

  18. SFESA: a web server for pairwise alignment refinement by secondary structure shifts.

    Science.gov (United States)

    Tong, Jing; Pei, Jimin; Grishin, Nick V

    2015-09-03

    Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.

  19. DIANA-microT web server: elucidating microRNA functions through target prediction.

    Science.gov (United States)

    Maragkakis, M; Reczko, M; Simossis, V A; Alexiou, P; Papadopoulos, G L; Dalamagas, T; Giannopoulos, G; Goumas, G; Koukis, E; Kourtis, K; Vergoulis, T; Koziris, N; Sellis, T; Tsanakas, P; Hatzigeorgiou, A G

    2009-07-01

    Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions with a user-friendly interface, providing extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches, and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence and protein databases, and users can search for targeted genes using different nomenclatures or functional features, such as the gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that help in the evaluation of the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed; DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets (66%). The DIANA-microT web server is freely available at www.microrna.gr/microT.

  20. PELE web server: atomistic study of biomolecular systems at your fingertips.

    Science.gov (United States)

    Madadkar-Sobhani, Armin; Guallar, Victor

    2013-07-01

    PELE, Protein Energy Landscape Exploration, our novel technology based on protein structure prediction algorithms and a Monte Carlo sampling, is capable of modelling the all-atom protein-ligand dynamical interactions in an efficient and fast manner, with two orders of magnitude reduced computational cost when compared with traditional molecular dynamics techniques. PELE's heuristic approach generates trial moves based on protein and ligand perturbations followed by side chain sampling and global/local minimization. The collection of accepted steps forms a stochastic trajectory. Furthermore, several processors may be run in parallel towards a collective goal or defining several independent trajectories; the whole procedure has been parallelized using the Message Passing Interface. Here, we introduce the PELE web server, designed to make the whole process of running simulations easier and more practical by minimizing input file demand, providing user-friendly interface and producing abstract outputs (e.g. interactive graphs and tables). The web server has been implemented in C++ using Wt (http://www.webtoolkit.eu) and MySQL (http://www.mysql.com). The PELE web server, accessible at http://pele.bsc.es, is free and open to all users with no login requirement.
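
    The perturbation, side-chain sampling/minimization and acceptance cycle summarized above can be sketched in a few lines. The energy function and move generators below are toy placeholders, not PELE's force field or algorithms; the sketch only shows how accepted Metropolis steps accumulate into a stochastic trajectory.

```python
import math
import random

# Minimal Metropolis-style sketch of a perturbation -> relaxation -> accept/reject
# cycle, in the spirit of the PELE heuristic described above. The energy function
# and move generators are toy placeholders, not PELE's actual implementation.

def energy(state):
    return sum(x * x for x in state)          # placeholder energy function

def perturb(state, step=0.5):
    return [x + random.uniform(-step, step) for x in state]   # ligand/protein perturbation

def relax(state, factor=0.9):
    return [x * factor for x in state]        # stands in for side-chain sampling + minimization

def pele_like_trajectory(state, n_steps=100, kT=1.0):
    trajectory = [state]
    e_old = energy(state)
    for _ in range(n_steps):
        candidate = relax(perturb(trajectory[-1]))
        e_new = energy(candidate)
        # Metropolis acceptance criterion
        if e_new <= e_old or random.random() < math.exp((e_old - e_new) / kT):
            trajectory.append(candidate)
            e_old = e_new
    return trajectory

if __name__ == "__main__":
    traj = pele_like_trajectory([5.0, -3.0, 2.0])
    print(f"{len(traj)} accepted steps, final energy {energy(traj[-1]):.3f}")
```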

  1. SEGEL: A Web Server for Visualization of Smoking Effects on Human Lung Gene Expression.

    Science.gov (United States)

    Xu, Yan; Hu, Brian; Alnajm, Sammy S; Lu, Yin; Huang, Yangxin; Allen-Gipson, Diane; Cheng, Feng

    2015-01-01

    Cigarette smoking is a major cause of death worldwide, resulting in over six million deaths per year. Cigarette smoke contains complex mixtures of chemicals that are harmful to nearly all organs of the human body, especially the lungs. Cigarette smoking is considered the major risk factor for many lung diseases, particularly chronic obstructive pulmonary disease (COPD) and lung cancer. However, the underlying molecular mechanisms of smoking-induced lung injury associated with these lung diseases still remain largely unknown. Expression microarray techniques have been widely applied to detect the effects of smoking on gene expression in different human cells in the lungs. These projects have provided a lot of useful information for researchers to understand the potential molecular mechanism(s) of smoke-induced pathogenesis. However, a user-friendly web server that would allow scientists to quickly query these data sets and compare the smoking effects on gene expression across different cells had not yet been established. For that reason, we have integrated eight public expression microarray data sets from tracheal epithelial cells, large airway epithelial cells, small airway epithelial cells, and alveolar macrophages into an online web server called SEGEL (Smoking Effects on Gene Expression of Lung). Users can query gene expression patterns across these cells from smokers and nonsmokers by gene symbol, and find the effects of smoking on the gene expression of lungs from this web server. Sex differences in response to smoking are also shown. The relationships between gene expression and cigarette smoking consumption were calculated and are shown in the server. The current version of the SEGEL web server contains 42,400 annotated gene probe sets represented on the Affymetrix Human Genome U133 Plus 2.0 platform. SEGEL will be an invaluable resource for researchers interested in the effects of smoking on gene expression in the lungs. The server also provides useful information

  2. miRNAFold: a web server for fast miRNA precursor prediction in genomes.

    Science.gov (United States)

    Tav, Christophe; Tempel, Sébastien; Poligny, Laurent; Tahi, Fariza

    2016-07-08

    Computational methods are required for prediction of non-coding RNAs (ncRNAs), which are involved in many biological processes, especially at the post-transcriptional level. Among these ncRNAs, miRNAs have been largely studied, and biologists need efficient and fast tools for their identification. In particular, ab initio methods are usually required when predicting novel miRNAs. Here we present a web server dedicated to large-scale identification of miRNA precursors in genomes. It is based on an algorithm called miRNAFold that predicts miRNA hairpin structures quickly and with high sensitivity. miRNAFold is implemented as a web server with an intuitive and user-friendly interface, as well as a standalone version. The web server is freely available at: http://EvryRNA.ibisc.univ-evry.fr/miRNAFold. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    Fulachier, Jerome; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, Javascript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  4. The Web Lecture Archive Project: Archiving ATLAS Presentations and Tutorials

    CERN Multimedia

    Herr, J

    2004-01-01

    The geographical diversity of the ATLAS Collaboration presents constant challenges in the communication between and training of its members. One important example is the need for training of new collaboration members and/or current members on new developments. The Web Lecture Archive Project (WLAP), a joint project between the University of Michigan and CERN Technical Training, has addressed this challenge by recording ATLAS tutorials in the form of streamed "Web Lectures," consisting of synchronized audio, video and high-resolution slides, available on demand to anyone in the world with a Web browser. ATLAS software tutorials recorded by WLAP include ATHENA, ATLANTIS, Monte Carlo event generators, Object Oriented Analysis and Design, GEANT4, and Physics EDM and tools. All ATLAS talks, including both tutorials and meetings are available at http://www.wlap.org/browser.php?ID=atlas. Members of the University of Michigan Physics Department and Media Union, under the framework of the ATLAS Collaboratory Project ...

  5. CCTOP: a Consensus Constrained TOPology prediction web server.

    Science.gov (United States)

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access of the CCTOP server is also available, and an example of client-side script is provided. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
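
    The abstract mentions programmable access and an example client-side script. The hedged sketch below shows what a minimal HTTP submission might look like; the submission path, parameter name and response handling are assumptions made for illustration, and the example script provided on the CCTOP site remains the authoritative reference.

```python
import requests

# Hypothetical client-side query to the CCTOP server. The submission path,
# parameter name and response handling are illustrative assumptions; consult
# the example client script on the CCTOP site for the documented interface.

CCTOP_URL = "http://cctop.enzim.ttk.mta.hu"   # server address from the abstract
SEQUENCE = "MKTIIALSYIFCLVFADYKDDDDKLVPRGS"   # toy amino-acid sequence

def submit_topology_prediction(sequence):
    response = requests.post(
        f"{CCTOP_URL}/submit",                # assumed endpoint, for illustration only
        data={"sequence": sequence},
        timeout=60,
    )
    response.raise_for_status()
    return response.text                      # results are downloadable as XML per the abstract

if __name__ == "__main__":
    print(submit_topology_prediction(SEQUENCE)[:500])
```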

  6. EnviroAtlas Green Space Proximity Gradient Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). In any given 1-square meter...

  7. ACFIS: a web server for fragment-based drug discovery

    Science.gov (United States)

    Hao, Ge-Fei; Jiang, Wen; Ye, Yuan-Nong; Wu, Feng-Xu; Zhu, Xiao-Lei; Guo, Feng-Biao; Yang, Guang-Fu

    2016-01-01

    In order to foster innovation and improve the effectiveness of drug discovery, there is a considerable interest in exploring unknown ‘chemical space’ to identify new bioactive compounds with novel and diverse scaffolds. Hence, fragment-based drug discovery (FBDD) was developed rapidly due to its advanced expansive search for ‘chemical space’, which can lead to a higher hit rate and ligand efficiency (LE). However, computational screening of fragments is always hampered by the promiscuous binding model. In this study, we developed a new web server Auto Core Fragment in silico Screening (ACFIS). It includes three computational modules, PARA_GEN, CORE_GEN and CAND_GEN. ACFIS can generate core fragment structure from the active molecule using fragment deconstruction analysis and perform in silico screening by growing fragments to the junction of core fragment structure. An integrated energy calculation rapidly identifies which fragments fit the binding site of a protein. We constructed a simple interface to enable users to view top-ranking molecules in 2D and the binding mode in 3D for further experimental exploration. This makes the ACFIS a highly valuable tool for drug discovery. The ACFIS web server is free and open to all users at http://chemyang.ccnu.edu.cn/ccb/server/ACFIS/. PMID:27150808

  8. CABS-flex 2.0: a web server for fast simulations of flexibility of protein structures.

    Science.gov (United States)

    Kuriata, Aleksander; Gierut, Aleksandra Maria; Oleniecki, Tymoteusz; Ciemny, Maciej Pawel; Kolinski, Andrzej; Kurcinski, Mateusz; Kmiecik, Sebastian

    2018-05-14

    Classical simulations of protein flexibility remain computationally expensive, especially for large proteins. A few years ago, we developed a fast method for predicting protein structure fluctuations that uses a single protein model as the input. The method has been made available as the CABS-flex web server and applied in numerous studies of protein structure-function relationships. Here, we present a major update of the CABS-flex web server to version 2.0. The new features include: extension of the method to significantly larger and multimeric proteins, customizable distance restraints and simulation parameters, contact maps and a new, enhanced web server interface. CABS-flex 2.0 is freely available at http://biocomp.chem.uw.edu.pl/CABSflex2.

  9. Performance Analysis of Container Deployment for Web Server Load Balancing

    Directory of Open Access Journals (Sweden)

    Muhammad Agung Nugroho

    2016-12-01

    Full Text Available Containers are the latest virtualization technology. Containers make it easier for system administrators to manage applications on a server. Docker containers can be used to build, prepare, and run applications; applications written in different programming languages can be created at any layer, wrapped in containers, and run in any environment, anywhere. Containers can also be used for load balancing by making use of HAProxy. Load balancing can be used to address the problem of a web server whose workload is too heavy (overloaded with requests); it is one method for increasing web server scalability while reducing the web server's workload. Tests were carried out by applying a request load to single-container and multi-container configurations and comparing their performance. The performance analysis uses processor, memory, and service-process parameters. The tests were run on a Raspberry Pi. The results show that multiple containers can be used to develop a load balancing method; the tests indicate that Raspberry Pi performance can be optimal because the processor load is shared.
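
    A minimal sketch of the kind of load test described above (a number of simulated users, a ramp-up period and a loop count per user) is shown below in Python. The target URL and parameter values are placeholders for the single-container or HAProxy-balanced multi-container web server under test.

```python
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Minimal load-test sketch mirroring the thread (users), ramp-up and loop-count
# parameters described above. The target URL is a placeholder for the single- or
# multi-container web server behind the load balancer.

TARGET_URL = "http://raspberrypi.local/"      # placeholder target
USERS, RAMP_UP_S, LOOPS = 10, 5.0, 20

def user_session(user_index):
    time.sleep(user_index * (RAMP_UP_S / USERS))   # stagger start (ramp-up)
    latencies = []
    for _ in range(LOOPS):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(TARGET_URL, timeout=10).read()
            latencies.append(time.perf_counter() - start)
        except OSError:
            pass                                    # a real test would count failures separately
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = [lat for batch in pool.map(user_session, range(USERS)) for lat in batch]
    if results:
        print(f"{len(results)} requests, mean {statistics.mean(results) * 1000:.1f} ms, "
              f"p95 {sorted(results)[int(0.95 * len(results)) - 1] * 1000:.1f} ms")
```

    Comparing the latency distributions collected for the single-container and multi-container setups gives the kind of evidence the study reports for the benefit of distributing processor load.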

  10. minepath.org: a free interactive pathway analysis web server.

    Science.gov (United States)

    Koumakis, Lefteris; Roussos, Panos; Potamias, George

    2017-07-03

    MinePath (www.minepath.org) is a web-based platform that elaborates on, and radically extends, the identification of differentially expressed sub-paths in molecular pathways. Besides the network topology, the underlying MinePath algorithmic processes exploit exact gene-gene molecular relationships (e.g. activation, inhibition) and are able to identify differentially expressed pathway parts. Each pathway is decomposed into all its constituent sub-paths, which in turn are matched with corresponding gene expression profiles. The highly ranked, phenotype-inclined sub-paths are kept. Apart from the pathway analysis algorithm, the fundamental innovation of the MinePath web-server concerns its advanced visualization and interactive capabilities. To our knowledge, this is the first pathway analysis server that introduces and offers visualization of the underlying and active pathway regulatory mechanisms instead of genes. Other features include live interaction, immediate visualization of functional sub-paths per phenotype and dynamic linked annotations for the engaged genes and molecular relations. The user can download not only the results but also the corresponding web viewer framework of the performed analysis. This feature provides the flexibility to immediately publish results without publishing source/expression data, and to get all the functionality of a web based pathway analysis viewer. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. CalFitter: a web server for analysis of protein thermal denaturation data.

    Science.gov (United States)

    Mazurenko, Stanislav; Stourac, Jan; Kunka, Antonin; Nedeljkovic, Sava; Bednar, David; Prokop, Zbynek; Damborsky, Jiri

    2018-05-14

    Despite significant advances in the understanding of protein structure-function relationships, revealing protein folding pathways still poses a challenge due to a limited number of relevant experimental tools. Widely-used experimental techniques, such as calorimetry or spectroscopy, critically depend on a proper data analysis. Currently, there are only separate data analysis tools available for each type of experiment with a limited model selection. To address this problem, we have developed the CalFitter web server to be a unified platform for comprehensive data fitting and analysis of protein thermal denaturation data. The server allows simultaneous global data fitting using any combination of input data types and offers 12 protein unfolding pathway models for selection, including irreversible transitions often missing from other tools. The data fitting produces optimal parameter values, their confidence intervals, and statistical information to define unfolding pathways. The server provides an interactive and easy-to-use interface that allows users to directly analyse input datasets and simulate modelled output based on the model parameters. CalFitter web server is available free at https://loschmidt.chemi.muni.cz/calfitter/.
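
    As a worked illustration of the kind of model fitting the server automates, the sketch below fits the simplest reversible two-state unfolding model to a synthetic melting curve with SciPy. It is only a toy example; CalFitter's twelve pathway models, irreversible transitions and global multi-dataset fitting go far beyond this.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting a reversible two-state unfolding model to a thermal
# denaturation curve. This toy fit only illustrates the idea behind the
# server; it is not the CalFitter implementation.

R = 8.314  # gas constant, J mol^-1 K^-1

def two_state(T, Tm, dHm, yN, yU):
    """Observed signal for N <-> U with van't Hoff enthalpy dHm at midpoint Tm."""
    dG = dHm * (1.0 - T / Tm)                 # Delta Cp neglected for simplicity
    fU = 1.0 / (1.0 + np.exp(dG / (R * T)))   # fraction unfolded
    return yN + (yU - yN) * fU

if __name__ == "__main__":
    T = np.linspace(293.15, 363.15, 71)
    rng = np.random.default_rng(0)
    y = two_state(T, 330.0, 4.2e5, 1.0, 0.2) + rng.normal(0, 0.01, T.size)   # synthetic data
    popt, _ = curve_fit(two_state, T, y, p0=(325.0, 3e5, 1.0, 0.0))
    print(f"Tm = {popt[0]:.1f} K, dHm = {popt[1] / 1000:.0f} kJ/mol")
```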

  12. The pepATTRACT web server for blind, large-scale peptide-protein docking.

    Science.gov (United States)

    de Vries, Sjoerd J; Rey, Julien; Schindler, Christina E M; Zacharias, Martin; Tuffery, Pierre

    2017-07-03

    Peptide-protein interactions are ubiquitous in the cell and form an important part of the interactome. Computational docking methods can complement experimental characterization of these complexes, but current protocols are not applicable on the proteome scale. pepATTRACT is a novel docking protocol that is fully blind, i.e. it does not require any information about the binding site. In various stages of its development, pepATTRACT has participated in CAPRI, making successful predictions for five out of seven protein-peptide targets. Its performance is similar or better than state-of-the-art local docking protocols that do require binding site information. Here we present a novel web server that carries out the rigid-body stage of pepATTRACT. On the peptiDB benchmark, the web server generates a correct model in the top 50 in 34% of the cases. Compared to the full pepATTRACT protocol, this leads to some loss of performance, but the computation time is reduced from ∼18 h to ∼10 min. Combined with the fact that it is fully blind, this makes the web server well-suited for large-scale in silico protein-peptide docking experiments. The rigid-body pepATTRACT server is freely available at http://bioserv.rpbs.univ-paris-diderot.fr/services/pepATTRACT. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. The HMMER Web Server for Protein Sequence Similarity Search.

    Science.gov (United States)

    Prakash, Ananth; Jeffryes, Matt; Bateman, Alex; Finn, Robert D

    2017-12-08

    Protein sequence similarity search is one of the most commonly used bioinformatics methods for identifying evolutionarily related proteins. In general, sequences that are evolutionarily related share some degree of similarity, and sequence-search algorithms use this principle to identify homologs. The requirement for a fast and sensitive sequence search method led to the development of the HMMER software, which in the latest version (v3.1) uses a combination of sophisticated acceleration heuristics and mathematical and computational optimizations to enable the use of profile hidden Markov models (HMMs) for sequence analysis. The HMMER Web server provides a common platform by linking the HMMER algorithms to databases, thereby enabling the search for homologs, as well as providing sequence and functional annotation by linking external databases. This unit describes three basic protocols and two alternate protocols that explain how to use the HMMER Web server using various input formats and user defined parameters. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.
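
    For readers who prefer programmatic access over the web forms covered by the protocols, a hedged sketch of an HTTP-based phmmer search is shown below. The endpoint URL, parameter names and JSON handling are assumptions made for illustration only; the protocols in this unit describe the supported interface.

```python
import requests

# Hedged sketch of a programmatic phmmer search against the HMMER web server.
# The endpoint URL, parameter names and response format are assumptions for
# illustration; follow the unit's protocols for the documented interface.

HMMER_SEARCH_URL = "https://www.ebi.ac.uk/Tools/hmmer/search/phmmer"  # assumed endpoint

def phmmer_search(sequence, database="uniprotkb"):
    response = requests.post(
        HMMER_SEARCH_URL,
        data={"seq": sequence, "seqdb": database},
        headers={"Accept": "application/json"},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    hits = phmmer_search(">query\nMKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
    print(type(hits))
```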

  14. EnviroAtlas 15m Riparian Buffer Forest Cover Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This EnviroAtlas dataset...

  15. EnviroAtlas 51m Riparian Buffer Vegetated Cover Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This EnviroAtlas dataset...

  16. EnviroAtlas - Potential Wetland Areas - Contiguous United States Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The EnviroAtlas Potential...

  17. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    OpenAIRE

    Dalai, Asish Kumar; Jena, Sanjay Kumar

    2017-01-01

    Reports on web application security risks show that SQL injection is the topmost vulnerability. The journey from static to dynamic web pages has led to the use of databases in web applications. Due to the lack of secure coding techniques, SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, web application, and the entire web server. In this article, the authors have proposed a novel method for prevent...
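
    The vulnerability class and the standard server-side mitigation can be illustrated with a short, generic sketch: SQL built by concatenating user input is interpreted as code, while the same input passed as a bound parameter is not. This is ordinary secure-coding practice, not the specific code-modification method proposed by the authors.

```python
import sqlite3

# Illustration of the SQL injection vulnerability class and the standard
# mitigation (parameterized queries). Generic example only, not the authors'
# server-side code-modification method.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "alice' OR '1'='1"   # typical injection payload

# UNSAFE: user input concatenated directly into the SQL string
unsafe_sql = f"SELECT secret FROM users WHERE name = '{user_input}'"
print("unsafe:", conn.execute(unsafe_sql).fetchall())   # returns every row

# SAFE: user input passed as a bound parameter, never parsed as SQL
safe_sql = "SELECT secret FROM users WHERE name = ?"
print("safe:  ", conn.execute(safe_sql, (user_input,)).fetchall())  # returns nothing
```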

  18. DNA barcode goes two-dimensions: DNA QR code web server.

    Science.gov (United States)

    Liu, Chang; Shi, Linchun; Xu, Xiaolan; Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin

    2012-01-01

    The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and a relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interest, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.
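
    The encoding step itself is straightforward with any standard QR library. The snippet below uses the third-party qrcode package (an assumption made for illustration, not the implementation behind the server above) to turn a placeholder barcode-like sequence into a QR image.

```python
# Encoding a DNA barcode sequence as a QR code image, using the third-party
# `qrcode` package (pip install "qrcode[pil]"). Illustration only, not the
# implementation behind the qrfordna web server described above.
import qrcode

barcode_sequence = (  # placeholder sequence standing in for a real ITS2/rbcL/matK marker
    "CGCATCGCGTCGCCCCCACCCTCTTGGGGGGAGGATGGCCTCCCGTGCCTTTGTGGCGCGGCTGGCC"
)

img = qrcode.make(barcode_sequence)   # build the QR symbol
img.save("dna_barcode_qr.png")        # write it out as a PNG image
print("wrote dna_barcode_qr.png")
```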

  19. DNA barcode goes two-dimensions: DNA QR code web server.

    Directory of Open Access Journals (Sweden)

    Chang Liu

    Full Text Available The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and a relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interest, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.

  20. CNA web server: rigidity theory-based thermal unfolding simulations of proteins for linking structure, (thermo-)stability, and function.

    Science.gov (United States)

    Krüger, Dennis M; Rathi, Prakash Chandra; Pfleger, Christopher; Gohlke, Holger

    2013-07-01

    The Constraint Network Analysis (CNA) web server provides a user-friendly interface to the CNA approach developed in our laboratory for linking results from rigidity analyses to biologically relevant characteristics of a biomolecular structure. The CNA web server provides a refined modeling of thermal unfolding simulations that considers the temperature dependence of hydrophobic tethers and computes a set of global and local indices for quantifying biomacromolecular stability. From the global indices, phase transition points are identified where the structure switches from a rigid to a floppy state; these phase transition points can be related to a protein's (thermo-)stability. Structural weak spots (unfolding nuclei) are automatically identified, too; this knowledge can be exploited in data-driven protein engineering. The local indices are useful in linking flexibility and function and to understand the impact of ligand binding on protein flexibility. The CNA web server robustly handles small-molecule ligands in general. To overcome issues of sensitivity with respect to the input structure, the CNA web server allows performing two ensemble-based variants of thermal unfolding simulations. The web server output is provided as raw data, plots and/or Jmol representations. The CNA web server, accessible at http://cpclab.uni-duesseldorf.de/cna or http://www.cnanalysis.de, is free and open to all users with no login requirement.

  1. Comparative Analysis of a Colocation Server Versus Amazon Web Services (Cloud) for the Usability of the Swa.co.id Portal at PT. Swa Media Bisnis

    Directory of Open Access Journals (Sweden)

    Lipur Sugiyanta

    2017-06-01

    Full Text Available To support the usability of its web portal, SWA Media Online used the Colocation Server web hosting service from Wowrack, whose physical servers and data center are located in Surabaya, Indonesia. Over time, the use of the Colocation Server was felt to increasingly hold back the company's growth, as shown by slowing access to the swa.co.id web portal. For that reason, in May-June 2015 SWA Media Online decided to move away from the Colocation Server to newer cloud technology, and at the end of June 2015 it officially migrated from colocation to Amazon Web Services, whose physical servers (for ASEAN customers) are located in Singapore. The features used are largely the same as with colocation, i.e. those matching the company's needs, but Amazon Web Services provides additional services or features in the form of a load balancer, auto scaling, and buckets (storage media). The methodology applied in this study is qualitative analysis. The results show that the additional features provided by Amazon Web Services were able to improve the portal's usability in terms of access speed: web portal access became faster than when the Colocation Server was used.

  2. Dscam1 web server: online prediction of Dscam1 self- and hetero-affinity.

    Science.gov (United States)

    Marini, Simone; Nazzicari, Nelson; Biscarini, Filippo; Wang, Guang-Zhong

    2017-06-15

    Formation of homodimers by identical Dscam1 protein isoforms on the cell surface is the key factor for the self-avoidance of growing neurites. Dscam1's immense diversity has a critical role in the formation of the arthropod neuronal circuit, showing unique evolutionary properties when compared to other cell surface proteins. Experimental measures are available for 89 self-binding and 1722 hetero-binding protein samples, out of more than 19 thousand (self-binding) and 350 million (hetero-binding) possible isoform combinations. We developed the Dscam1 Web Server to quickly predict Dscam1 self- and hetero-binding affinity for batches of Dscam1 isoforms. The server can help the study of Dscam1 affinity and help researchers navigate through the tens of millions of possible isoform combinations to isolate the strong-binding ones. Dscam1 Web Server is freely available at: http://bioinformatics.tecnoparco.org/Dscam1-webserver . Web server code is available at https://gitlab.com/ne1s0n/Dscam1-binding . simone.marini@unipv.it or guangzhong.wang@picb.ac.cn. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  3. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR

    NARCIS (Netherlands)

    Van Der Schot, Gijs; Bonvin, Alexandre M J J

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on

  4. ProBiS-ligands: a web server for prediction of ligands by examination of protein binding sites.

    Science.gov (United States)

    Konc, Janez; Janežič, Dušanka

    2014-07-01

    The ProBiS-ligands web server predicts binding of ligands to a protein structure. Starting with a protein structure or binding site, ProBiS-ligands first identifies template proteins in the Protein Data Bank that share similar binding sites. Based on the superimpositions of the query protein and the similar binding sites found, the server then transposes the ligand structures from those sites to the query protein. Such ligand prediction supports many activities, e.g. drug repurposing. The ProBiS-ligands web server, an extension of the ProBiS web server, is open and free to all users at http://probis.cmm.ki.si/ligands. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. ACFIS: a web server for fragment-based drug discovery.

    Science.gov (United States)

    Hao, Ge-Fei; Jiang, Wen; Ye, Yuan-Nong; Wu, Feng-Xu; Zhu, Xiao-Lei; Guo, Feng-Biao; Yang, Guang-Fu

    2016-07-08

    In order to foster innovation and improve the effectiveness of drug discovery, there is a considerable interest in exploring unknown 'chemical space' to identify new bioactive compounds with novel and diverse scaffolds. Hence, fragment-based drug discovery (FBDD) was developed rapidly due to its advanced expansive search for 'chemical space', which can lead to a higher hit rate and ligand efficiency (LE). However, computational screening of fragments is always hampered by the promiscuous binding model. In this study, we developed a new web server Auto Core Fragment in silico Screening (ACFIS). It includes three computational modules, PARA_GEN, CORE_GEN and CAND_GEN. ACFIS can generate core fragment structure from the active molecule using fragment deconstruction analysis and perform in silico screening by growing fragments to the junction of core fragment structure. An integrated energy calculation rapidly identifies which fragments fit the binding site of a protein. We constructed a simple interface to enable users to view top-ranking molecules in 2D and the binding mode in 3D for further experimental exploration. This makes the ACFIS a highly valuable tool for drug discovery. The ACFIS web server is free and open to all users at http://chemyang.ccnu.edu.cn/ccb/server/ACFIS/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. AlignMe—a membrane protein sequence alignment web server

    Science.gov (United States)

    Stamm, Marcus; Staritzbichler, René; Khafizov, Kamil; Forrest, Lucy R.

    2014-01-01

    We present a web server for pair-wise alignment of membrane protein sequences, using the program AlignMe. The server makes available two operational modes of AlignMe: (i) sequence to sequence alignment, taking two sequences in fasta format as input, combining information about each sequence from multiple sources and producing a pair-wise alignment (PW mode); and (ii) alignment of two multiple sequence alignments to create family-averaged hydropathy profile alignments (HP mode). For the PW sequence alignment mode, four different optimized parameter sets are provided, each suited to pairs of sequences with a specific similarity level. These settings utilize different types of inputs: (position-specific) substitution matrices, secondary structure predictions and transmembrane propensities from transmembrane predictions or hydrophobicity scales. In the second (HP) mode, each input multiple sequence alignment is converted into a hydrophobicity profile averaged over the provided set of sequence homologs; the two profiles are then aligned. The HP mode enables qualitative comparison of transmembrane topologies (and therefore potentially of 3D folds) of two membrane proteins, which can be useful if the proteins have low sequence similarity. In summary, the AlignMe web server provides user-friendly access to a set of tools for analysis and comparison of membrane protein sequences. Access is available at http://www.bioinfo.mpg.de/AlignMe PMID:24753425
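
    The family-averaged hydropathy profile used in HP mode can be sketched simply: convert each aligned sequence to a windowed hydropathy curve and average the curves across the family. The sketch below uses the Kyte-Doolittle scale and a plain sliding window; it is a simplification, not the AlignMe code, and the window size and gap handling are arbitrary choices.

```python
# Simplified sketch of a family-averaged hydropathy profile (HP-mode idea):
# per-residue Kyte-Doolittle values, smoothed with a sliding window and
# averaged over aligned homologs. Not the actual AlignMe implementation.

KD = {  # Kyte-Doolittle hydropathy scale
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
    "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
    "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def hydropathy_profile(seq, window=19):
    """Sliding-window Kyte-Doolittle profile; gaps ('-') and unknowns contribute 0."""
    values = [KD.get(res, 0.0) for res in seq.upper()]
    half = window // 2
    profile = []
    for i in range(len(values)):
        win = values[max(0, i - half):i + half + 1]
        profile.append(sum(win) / len(win))
    return profile

def family_profile(aligned_seqs, window=19):
    """Average the windowed profiles over the columns of a multiple sequence alignment."""
    profiles = [hydropathy_profile(s, window) for s in aligned_seqs]
    return [sum(col) / len(col) for col in zip(*profiles)]

if __name__ == "__main__":
    msa = ["MKTIIALSYIFCLVFA", "MKTLIALSYIFCLAFA"]
    print([round(v, 2) for v in family_profile(msa, window=7)][:8])
```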

  7. A Web-Server of Cell Type Discrimination System

    Directory of Open Access Journals (Sweden)

    Anyou Wang

    2014-01-01

    Full Text Available Discriminating cell types is a daily request for stem cell biologists. However, there is not a user-friendly system available to date for public users to discriminate the common cell types: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web-server of cell type discrimination system, to discriminate the three cell types and their subtypes, like fetal versus adult SCs. WCTDS is developed as a top-layer application of our recent publication regarding cell type discrimination, which employs DNA methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive user input, run mathematical models for analyzing data, and then present results to users. This framework is flexible and easy to expand for other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be expanded to detect other cell types, like cancer cells.

  8. ATLAS Metadata Interface (AMI), a generic metadata framework

    Science.gov (United States)

    Fulachier, J.; Odier, J.; Lambert, F.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  9. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    AUTHOR|(SzGeCERN)573735; The ATLAS collaboration; Odier, Jerome; Lambert, Fabian

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  10. Worldwide telemedicine services based on distributed multimedia electronic patient records by using the second generation Web server hyperwave.

    Science.gov (United States)

    Quade, G; Novotny, J; Burde, B; May, F; Beck, L E; Goldschmidt, A

    1999-01-01

    A distributed multimedia electronic patient record (EPR) is a central component of a medicine-telematics application that supports physicians working in rural areas of South America, and offers medical services to scientists in Antarctica. A Hyperwave server is used to maintain the patient record. As opposed to common web servers--and as a second generation web server--Hyperwave provides the capability of holding documents in a distributed web space without the problem of broken links. This enables physicians to browse through a patient's record by using a standard browser even if the patient's record is distributed over several servers. The patient record is basically implemented on the "Good European Health Record" (GEHR) architecture.

  11. C#: Connecting a Mobile Application to Oracle Server via Web Services

    Directory of Open Access Journals (Sweden)

    Daniela Ilea

    2008-01-01

    Full Text Available This article is focused on mobile development using Visual Studio 2005, web services and their connection to Oracle Server, and aims to help programmers build simple and useful mobile applications.

  12. DIANA-microT web server v5.0: service integration into miRNA functional analysis workflows.

    Science.gov (United States)

    Paraskevopoulou, Maria D; Georgakilas, Georgios; Kostoulas, Nikos; Vlachos, Ioannis S; Vergoulis, Thanasis; Reczko, Martin; Filippidis, Christos; Dalamagas, Theodore; Hatzigeorgiou, A G

    2013-07-01

    MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it is being widely used from the scientific community, since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANA-microT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. The in silico-predicted miRNA-gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned, to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis. DIANA-microT web server v5.0 also supports a complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines.

  13. Oracle WebLogic Server 12c advanced administration cookbook

    CERN Document Server

    Iwazaki, Dalton

    2013-01-01

    Using real-life problems and simple solutions, this book will make any issue seem small. WebLogic Server books can be a bit dry, but Dalton keeps the tone light and ensures that, no matter how complex the problem, you always feel like you have someone right there with you helping you along. This book is ideal for those who know the basics of WebLogic but want to dive deeper and get to grips with more advanced topics. So if you are a datacenter operator, system administrator or even a Java developer, this book could be exactly what you are looking for to take you one step further with Oracle WebLogic Server.

  14. TAPIR, a web server for the prediction of plant microRNA targets, including target mimics.

    Science.gov (United States)

    Bonnet, Eric; He, Ying; Billiau, Kenny; Van de Peer, Yves

    2010-06-15

    We present a new web server called TAPIR, designed for the prediction of plant microRNA targets. The server offers the possibility to search for plant miRNA targets using either a fast or a precise algorithm. The precise option is much slower but is guaranteed to find miRNA-target duplexes that are less perfectly paired. Furthermore, the precise option allows the prediction of target mimics, which are characterized by a miRNA-target duplex having a large loop, making them undetectable by traditional tools. The TAPIR web server can be accessed at: http://bioinformatics.psb.ugent.be/webtools/tapir. Supplementary data are available at Bioinformatics online.

  15. EnviroAtlas - Rare Ecosystems in the Conterminous United States Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This EnviroAtlas dataset...

  16. EnviroAtlas - Dasymetric Population in the Conterminous United States Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This EnviroAtlas dataset...

  17. An Open Source Web Map Server Implementation For California and the Digital Earth: Lessons Learned

    Science.gov (United States)

    Sullivan, D. V.; Sheffner, E. J.; Skiles, J. W.; Brass, J. A.; Condon, Estelle (Technical Monitor)

    2000-01-01

    This paper describes an Open Source implementation of the Open GIS Consortium's Web Map interface. It is based on the very popular Apache WWW Server, the Sun Microsystems Java Servlet Development Kit, and a C language shared library interface to a spatial datastore. This server was initially written as a proof of concept, to support a National Aeronautics and Space Administration (NASA) Digital Earth test bed demonstration. It will also find use in the California Land Science Information Partnership (CaLSIP), a joint program between NASA and the state of California. At least one WebMap enabled server will be installed in every one of the state's 58 counties. This server will form a basis for a simple, easily maintained installation for those entities that do not yet require one of the larger, more expensive, commercial offerings.
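
    The Web Map interface served by such an implementation is request-driven: a client asks for a rendered map through a GetMap URL. The sketch below assembles a typical WMS 1.1.1 GetMap request; the base URL and layer name are placeholders, not the actual CaLSIP or Digital Earth deployment.

```python
from urllib.parse import urlencode

# Sketch of constructing an OGC WMS GetMap request of the kind such a server
# answers. The base URL and layer name are placeholders for illustration.

BASE_URL = "http://example.org/wms"            # placeholder WMS endpoint

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "california_landcover",          # placeholder layer name
    "SRS": "EPSG:4326",
    "BBOX": "-124.5,32.5,-114.1,42.0",         # lon/lat bounding box roughly covering California
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}

print(f"{BASE_URL}?{urlencode(params)}")       # URL a map viewer would request
```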

  18. The last ATLAS overview week now available on Web Lectures

    CERN Multimedia

    Jeremy Herr

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as the medium. This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please enjoy the lectures and send us a note at wlap@umich.edu to tell us what you think. The newly available WLAP item relating to ATLAS is the following: ATLAS Week Plenary, CERN, 2-3 October 2006 All previous WLAP lectures are also available on the web.

  19. CRISPR-FOCUS: A web server for designing focused CRISPR screening experiments

    OpenAIRE

    Cao, Qingyi; Ma, Jian; Chen, Chen-Hao; Xu, Han; Chen, Zhi; Li, Wei; Liu, X. Shirley

    2017-01-01

    The recently developed CRISPR screen technology, based on the CRISPR/Cas9 genome editing system, enables genome-wide interrogation of gene functions in an efficient and cost-effective manner. Although many computational algorithms and web servers have been developed to design single-guide RNAs (sgRNAs) with high specificity and efficiency, algorithms specifically designed for conducting CRISPR screens are still lacking. Here we present CRISPR-FOCUS, a web-based platform to search and prioriti...

  20. PIQMIe: a web server for semi-quantitative proteomics data management and analysis.

    Science.gov (United States)

    Kuzniar, Arnold; Kanaar, Roland

    2014-07-01

    We present the Proteomics Identifications and Quantitations Data Management and Integration Service or PIQMIe that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. DelPhi Web Server: A comprehensive online suite for electrostatic calculations of biological macromolecules and their complexes

    Science.gov (United States)

    Sarkar, Subhra; Witham, Shawn; Zhang, Jie; Zhenirovskyy, Maxim; Rocchia, Walter; Alexov, Emil

    2011-01-01

    Here we report a web server, the DelPhi web server, which utilizes the DelPhi program to calculate electrostatic energies and the corresponding electrostatic potential, ionic distributions and dielectric map. The server provides extra services to fix structural defects, such as missing atoms in the structural file, and allows for generation of missing hydrogen atoms. The hydrogen placement and the corresponding DelPhi calculations can be done with user-selected force field parameters, either Charmm22, Amber98 or OPLS. Upon completion of the calculations, the user is given the option to download the fixed and protonated structural file, together with the parameter and DelPhi output files for further analysis. Utilizing the Jmol viewer, the user can view the corresponding structural file, manipulate it and change the presentation. In addition, if the potential map is requested to be calculated, the potential can be mapped onto the molecular surface. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver. PMID:24683424

  2. WeBIAS: a web server for publishing bioinformatics applications.

    Science.gov (United States)

    Daniluk, Paweł; Wilczyński, Bartek; Lesyng, Bogdan

    2015-11-02

    One of the requirements for a successful scientific tool is its availability. Developing a functional web service, however, is usually considered a mundane and ungratifying task, and is quite often neglected. When publishing bioinformatic applications, such an attitude puts an additional burden on reviewers, who have to cope with poorly designed interfaces in order to assess the quality of the presented methods, and it impairs the tool's actual usefulness to the scientific community at large. In this note we present WeBIAS, a simple, self-contained solution to make command-line programs accessible through web forms. It comprises a web portal capable of serving several applications and backend schedulers which carry out computations. The server handles user registration and authentication, stores queries and results, and provides a convenient administrator interface. WeBIAS is implemented in Python and available under the GNU Affero General Public License. It has been developed and tested on GNU/Linux compatible platforms covering a vast majority of operational WWW servers. Since it is written in pure Python, it should also be easy to deploy on other platforms supporting Python (e.g. Windows, Mac OS X). Documentation and source code, as well as a demonstration site, are available at http://bioinfo.imdik.pan.pl/webias . WeBIAS has been designed specifically with ease of installation and deployment of services in mind. Setting up a simple application requires minimal effort, yet it is possible to create visually appealing, feature-rich interfaces for query submission and presentation of results.

  3. SMPBS: Web server for computing biomolecular electrostatics using finite element solvers of size modified Poisson-Boltzmann equation.

    Science.gov (United States)

    Xie, Yang; Ying, Jinyong; Xie, Dexuan

    2017-03-30

    SMPBS (Size Modified Poisson-Boltzmann Solvers) is a web server for computing biomolecular electrostatics using finite element solvers of the size modified Poisson-Boltzmann equation (SMPBE). SMPBE not only reflects ionic size effects but also includes the classic Poisson-Boltzmann equation (PBE) as a special case. Thus, its web server is expected to have a broader range of applications than a PBE web server. SMPBS is designed with a dynamic, mobile-friendly user interface, and features easily accessible help text, asynchronous data submission, and an interactive, hardware-accelerated molecular visualization viewer based on the 3Dmol.js library. In particular, the viewer allows computed electrostatics to be directly mapped onto an irregular triangular mesh of a molecular surface. Due to this functionality and the fast SMPBE finite element solvers, the web server is very efficient in the calculation and visualization of electrostatics. In addition, SMPBE is reconstructed using a new objective electrostatic free energy, clearly showing that the electrostatics and ionic concentrations predicted by SMPBE are optimal in the sense of minimizing the objective electrostatic free energy. SMPBS is available at the URL: smpbs.math.uwm.edu © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.

  4. TCS: a web server for multiple sequence alignment evaluation and phylogenetic reconstruction.

    Science.gov (United States)

    Chang, Jia-Ming; Di Tommaso, Paolo; Lefort, Vincent; Gascuel, Olivier; Notredame, Cedric

    2015-07-01

    This article introduces the Transitive Consistency Score (TCS) web server, a service making it possible to estimate the local reliability of protein multiple sequence alignments (MSAs) using the TCS index. The evaluation can be used to identify the aligned positions most likely to contain structurally analogous residues and also most likely to support an accurate phylogenetic reconstruction. The TCS scoring scheme has been shown to be an accurate predictor of structural alignment correctness among commonly used methods. It has also been shown to outperform common filtering schemes like Gblocks or trimAl when doing MSA post-processing prior to phylogenetic tree reconstruction. The web server is available from http://tcoffee.crg.cat/tcs. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. WebSphere Application Server Step by Step

    CERN Document Server

    Cline, Owen; Van Sickel, Peter

    2012-01-01

    WebSphere Application Server (WAS) is complex and multifaceted middleware used by huge enterprises as well as small businesses. In this book, the authors do an excellent job of covering the many aspects of the software. While other books merely cover installation and configuration, this book goes beyond that to cover the critical verification and management process to ensure a successful installation and implementation. It also addresses all of the different packages-from Express to Network-so that no matter what size your company is, you will be able to successfully implement WAS V6. To de

  6. High-Performance Tiled WMS and KML Web Server

    Science.gov (United States)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.

  7. The Next Generation ATLAS Production System

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Mashinistov, Ruslan; Vaniachine, Alexandre

    2015-01-01

    Data processing and simulation in the ATLAS experiment at the LHC grow continuously, as more data and more use cases emerge. For data processing, the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, dynamically submitted by the ATLAS workload management system (PanDA/JEDI) and executed on the Grid, clouds and supercomputers. Patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production system framework for template definitions of the many-task workflows. The user interface and system logic of these workflows are being implemented in the Database Engine for Tasks (DEFT). Such development required using modern computing technologies and approaches. We report technical details of this development: database implementation, server logic and Web user interface technologies.

  8. DEVELOPING WEB MAPPING APPLICATION USING ARCGIS SERVER WEB APPLICATION DEVELOPMENT FRAMEWORK (ADF) FOR GEOSPATIAL DATA GENERATED DURING REHABILITATION AND RECONSTRUCTION PROCESS OF POST-TSUNAMI 2004 DISASTER IN ACEH

    Directory of Open Access Journals (Sweden)

    Nizamuddin Nizamuddin

    2014-04-01

    Full Text Available ESRI ArcGIS Server is equipped with the ArcGIS Server Web Application Development Framework (ADF) and ArcGIS Web Controls integration for Visual Studio.NET. Both the ArcGIS Server Manager for .NET and ArcGIS Web Controls can be easily utilized for developing ASP.NET-based ESRI Web mapping applications. In this study we implemented both tools to develop an ASP.NET-based ESRI Web mapping application for geospatial data generated during the rehabilitation and reconstruction process of the post-tsunami 2004 disaster in Aceh province. The rehabilitation and reconstruction process has produced a tremendous amount of geospatial data. This method was chosen in this study because, in the process of developing a web mapping application, one can easily and quickly create Mapping Services for huge geospatial data and also develop a Web mapping application without writing any code. However, when utilizing Visual Studio.NET 2008, one needs to have some coding ability.

  9. StarScan: a web server for scanning small RNA targets from degradome sequencing data.

    Science.gov (United States)

    Liu, Shun; Li, Jun-Hao; Wu, Jie; Zhou, Ke-Ren; Zhou, Hui; Yang, Jian-Hua; Qu, Liang-Hu

    2015-07-01

    Endogenous small non-coding RNAs (sRNAs), including microRNAs, PIWI-interacting RNAs and small interfering RNAs, play important gene regulatory roles in animals and plants by pairing to the protein-coding and non-coding transcripts. However, computationally assigning these various sRNAs to their regulatory target genes remains technically challenging. Recently, a high-throughput degradome sequencing method was applied to identify biologically relevant sRNA cleavage sites. In this study, an integrated web-based tool, StarScan (sRNA target Scan), was developed for scanning sRNA targets using degradome sequencing data from 20 species. Given a sRNA sequence from plants or animals, our web server performs an ultrafast and exhaustive search for potential sRNA-target interactions in annotated and unannotated genomic regions. The interactions between small RNAs and target transcripts were further evaluated using a novel tool, alignScore. A novel tool, degradomeBinomTest, was developed to quantify the abundance of degradome fragments located at the 9-11th nucleotide from the sRNA 5' end. This is the first web server for discovering potential sRNA-mediated RNA cleavage events in plants and animals, which affords mechanistic insights into the regulatory roles of sRNAs. The StarScan web server is available at http://mirlab.sysu.edu.cn/starscan/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
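
    The degradomeBinomTest idea, checking whether degradome 5' ends pile up opposite nucleotides 9-11 of the sRNA more often than expected by chance, can be sketched as a simple one-sided binomial test. The counts, window size and uniform null model below are illustrative assumptions, not values from the StarScan pipeline.

```python
from scipy.stats import binomtest

# Sketch of the degradomeBinomTest idea: test whether degradome 5' ends are
# enriched in the expected cleavage window (opposite sRNA nucleotides 9-11)
# relative to a uniform null over the target region. Numbers are illustrative.

reads_in_window = 42        # degradome 5' ends falling in the 3-nt cleavage window
total_reads = 180           # all degradome 5' ends mapped to the target transcript region
region_length = 60          # length (nt) of the region considered
p_null = 3 / region_length  # chance of landing in a 3-nt window under uniform coverage

result = binomtest(reads_in_window, total_reads, p_null, alternative="greater")
print(f"P(enrichment by chance) = {result.pvalue:.3e}")
```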

  10. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins.

    Science.gov (United States)

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-07-01

    Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. its druggability, is of major interest in the target identification phase of drug discovery. Pocket druggability investigations therefore represent a key step of compound clinical progression projects. Current computational druggability prediction models are each tied to a single pocket estimation method, despite the uncertainties inherent in pocket estimation. In this paper, we propose 'PockDrug-Server' to predict pocket druggability; it is effective both on (i) pockets guided by ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) pockets estimated solely from protein structure information (based on the amino acid atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results across different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, and thus effective on apo pockets, which are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be run on one protein or a set of apo/holo proteins using the different pocket estimation methods proposed by our web server, or on any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. RaptorX-Property: a web server for protein structure property prediction.

    Science.gov (United States)

    Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo

    2016-07-08

    RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server that predicts the structural properties of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in the PDB or with very sparse sequence profiles (i.e. carrying little evolutionary information). The server employs a powerful in-house deep learning model, DeepCNF (Deep Convolutional Neural Fields), to predict secondary structure (SS), solvent accessibility (ACC) and disordered regions (DISO). DeepCNF not only models the complex sequence-structure relationship through a deep hierarchical architecture, but also models the interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and other benchmarks, this server obtains ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
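
    The Q3 and Q8 accuracies quoted above are simply the fraction of residues whose 3-state or 8-state label is predicted correctly. A minimal sketch of that metric follows; the example strings are invented and are not RaptorX output.

      # Minimal sketch of Q3/Q8-style accuracy: fraction of residues whose
      # secondary-structure label matches the reference. Example strings are invented.
      def q_accuracy(predicted: str, reference: str) -> float:
          assert len(predicted) == len(reference)
          correct = sum(p == r for p, r in zip(predicted, reference))
          return correct / len(reference)


      if __name__ == "__main__":
          pred = "HHHHCCCEEEECCHHH"   # H = helix, E = strand, C = coil
          ref  = "HHHHCCCEEECCCHHH"
          print(f"Q3 = {q_accuracy(pred, ref):.2%}")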

  12. EnviroAtlas Tree Cover Configuration and Connectivity, Water Background Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The 1-meter resolution tree...

  13. PharmMapper 2017 update: a web server for potential drug target identification with a comprehensive target pharmacophore database.

    Science.gov (United States)

    Wang, Xia; Shen, Yihang; Wang, Shiwei; Li, Shiliang; Zhang, Weilin; Liu, Xiaofeng; Lai, Luhua; Pei, Jianfeng; Li, Honglin

    2017-07-03

    The PharmMapper online tool is a web server for potential drug target identification by reverse pharmacophore matching of a query compound against an in-house pharmacophore model database. The original version of PharmMapper includes more than 7000 target pharmacophores derived from complex crystal structures with corresponding protein target annotations. In this article, we present a new version of the PharmMapper web server, whose backend pharmacophore database is six times larger than the earlier one, with a total of 23 236 proteins covering 16 159 druggable pharmacophore models and 51 431 ligandable pharmacophore models. The expanded target data cover 450 indications and 4800 molecular functions, compared to 110 indications and 349 molecular functions in our last update. In addition, the new web server provides a statistically meaningful ranking of the identified drug targets, achieved through the use of standard scores. It also features an improved user interface. The proposed web server is freely available at http://lilab.ecust.edu.cn/pharmmapper/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
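
    The "standard scores" mentioned above are z-scores: each raw fit score is centered and scaled by the score distribution, so candidate targets can be compared on a common footing. A small sketch with invented scores and target names (not PharmMapper data) follows.

      # Sketch of standard-score (z-score) ranking of candidate targets.
      # Raw fit scores and target names are invented for illustration.
      from statistics import mean, stdev


      def rank_by_zscore(raw_scores: dict) -> list:
          mu, sigma = mean(raw_scores.values()), stdev(raw_scores.values())
          z = {target: (s - mu) / sigma for target, s in raw_scores.items()}
          return sorted(z.items(), key=lambda kv: kv[1], reverse=True)


      if __name__ == "__main__":
          fit = {"targetA": 4.1, "targetB": 3.2, "targetC": 5.0, "targetD": 2.7}
          for target, zscore in rank_by_zscore(fit):
              print(f"{target}: z = {zscore:+.2f}")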

  14. The use of the TWiki Web in ATLAS

    International Nuclear Information System (INIS)

    Amram, Nir; Antonelli, Stefano; Haywood, Stephen; Lloyd, Steve; Luehring, Frederick; Poulard, Gilbert

    2010-01-01

    The ATLAS Experiment, with over 2000 collaborators, needs efficient and effective means of communicating information. The Collaboration has been using the TWiki Web at CERN for over three years and now has more than 7000 web pages, some of which are protected. This number greatly exceeds the number of 'static' HTML pages, and in the last year, there has been a significant migration to the TWiki. The TWiki is one example of the many different types of Wiki web which exist. In this paper, a description is given of the ATLAS TWiki at CERN. The tools used by the Collaboration to manage the TWiki are described and some of the problems encountered explained. A very useful development has been the creation of a set of Workbooks (Users' Guides) - these have benefitted from the TWiki environment and, in particular, a tool to extract pdf from the associated pages.

  15. Rtools: a web server for various secondary structural analyses on single RNA sequences.

    Science.gov (United States)

    Hamada, Michiaki; Ono, Yukiteru; Kiryu, Hisanori; Sato, Kengo; Kato, Yuki; Fukunaga, Tsukasa; Mori, Ryota; Asai, Kiyoshi

    2016-07-08

    The secondary structures, as well as the nucleotide sequences, are important features of RNA molecules that characterize their functions. According to the thermodynamic model, however, the probability of any single secondary structure is very small. As a consequence, any tool that predicts the secondary structures of RNAs has limited accuracy. On the other hand, there are a few tools that compensate for the imperfect predictions by calculating and visualizing secondary structural information from RNA sequences. It is desirable to obtain the rich information from those tools through a friendly interface. We implemented a web server of tools to predict secondary structures and to calculate various structural features based on the energy models of secondary structures. By simply giving an RNA sequence to the web server, the user can obtain different types of secondary structure solutions, marginal probabilities such as base-pairing probabilities, loop probabilities and accessibilities of local bases, the energy changes caused by arbitrary base mutations, as well as measures for validating the predicted secondary structures. The web server is available at http://rtools.cbrc.jp, and integrates the software tools CentroidFold, CentroidHomfold, IPKnot, CapR, Raccess, Rchange and RintD. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. PseKNC: a flexible web server for generating pseudo K-tuple nucleotide composition.

    Science.gov (United States)

    Chen, Wei; Lei, Tian-Yu; Jin, Dian-Chuan; Lin, Hao; Chou, Kuo-Chen

    2014-07-01

    The pseudo oligonucleotide composition, or pseudo K-tuple nucleotide composition (PseKNC), can be used to represent a DNA or RNA sequence with a discrete model or vector yet still keep considerable sequence order information, particularly the global or long-range sequence order information, via the physicochemical properties of its constituent oligonucleotides. Therefore, the PseKNC approach may hold very high potential for enhancing the power in dealing with many problems in computational genomics and genome sequence analysis. However, dealing with different DNA or RNA problems may need different kinds of PseKNC. Here, we present a flexible and user-friendly web server for PseKNC (at http://lin.uestc.edu.cn/pseknc/default.aspx) by which users can easily generate many different modes of PseKNC according to their need by selecting various parameters and physicochemical properties. Furthermore, for the convenience of the vast majority of experimental scientists, a step-by-step guide is provided on how to use the current web server to generate their desired PseKNC without the need to follow the complicated mathematical equations, which are presented in this article just for the integrity of PseKNC formulation and its development. It is anticipated that the PseKNC web server will become a very useful tool in computational genomics and genome sequence analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
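
    For readers unfamiliar with the representation, the non-pseudo part of PseKNC is simply the normalized frequency of every K-tuple in the sequence; the pseudo components then append sequence-order correlation factors computed from physicochemical properties of the oligonucleotides. The sketch below covers only the plain K-tuple composition and is not the server's implementation.

      # Sketch of plain K-tuple nucleotide composition (the non-pseudo part of PseKNC).
      # The full PseKNC additionally appends correlation factors derived from
      # physicochemical properties of oligonucleotides; that part is omitted here.
      from itertools import product


      def ktuple_composition(sequence: str, k: int = 2) -> dict:
          alphabet = "ACGT"
          counts = {"".join(p): 0 for p in product(alphabet, repeat=k)}
          total = len(sequence) - k + 1
          for i in range(total):
              kmer = sequence[i:i + k]
              if kmer in counts:
                  counts[kmer] += 1
          return {kmer: n / total for kmer, n in counts.items()}


      if __name__ == "__main__":
          comp = ktuple_composition("ACGTACGTGGCCA", k=2)
          print({kmer: round(f, 3) for kmer, f in comp.items() if f > 0})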

  17. LigSearch: a knowledge-based web server to identify likely ligands for a protein target

    Energy Technology Data Exchange (ETDEWEB)

    Beer, Tjaart A. P. de; Laskowski, Roman A. [European Bioinformatics Institute (EMBL–EBI), Wellcome Trust Genome Campus, Hinxton, Cambridge CB10 1SD (United Kingdom); Duban, Mark-Eugene [Northwestern University Feinberg School of Medicine, Chicago, Illinois (United States); Chan, A. W. Edith [University College London, London WC1E 6BT (United Kingdom); Anderson, Wayne F. [Northwestern University Feinberg School of Medicine, Chicago, Illinois (United States); Thornton, Janet M., E-mail: thornton@ebi.ac.uk [European Bioinformatics Institute (EMBL–EBI), Wellcome Trust Genome Campus, Hinxton, Cambridge CB10 1SD (United Kingdom)

    2013-12-01

    LigSearch is a web server for identifying ligands likely to bind to a given protein. Identifying which ligands might bind to a protein before crystallization trials could provide a significant saving in time and resources. LigSearch, a web server aimed at predicting ligands that might bind to and stabilize a given protein, has been developed. Using a protein sequence and/or structure, the system searches against a variety of databases, combining available knowledge, and provides a clustered and ranked output of possible ligands. LigSearch can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/LigSearch.

  18. EnviroAtlas - Potentially Restorable Wetlands on Agricultural Land - Contiguous United States Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The EnviroAtlas Potentially...

  19. Design and implementation of the web Linguistic and Ethnographic Atlas of Colombia

    Science.gov (United States)

    Rocha S., Luz Angela; Bonilla, Johnatan; Bernal, Julio; Duarte, Catherine; Rodriguez, Alejandro

    2018-05-01

    The Atlas Lingüístico y Etnográfico de Colombia (Linguistic and Ethnographic Atlas of Colombia), known as "ALEC", is a compilation of the popular spoken Spanish of the populations of Colombia; the underlying research was carried out over more than fifty years. The result of this work is a collection of thematic maps organized in six volumes and their supplements in analog format. In this context the project entitled "Interactive ALEC" was created, whose main objective is to develop a digital and interactive web version of the Linguistic and Ethnographic Atlas of Colombia (1983) and its supplements. The Corpus Linguistics research group of the Instituto Caro y Cuervo and the research group NIDE of the Universidad Distrital "Francisco José de Caldas" have been working together on the design and development of the web Atlas, which allows the visualization and consultation of the spatial information contained in volume III of the analog ALEC Atlas, applying concepts of Geographical Information Systems and web cartography. The objective of this paper is therefore to show the design and development process of the web prototype of the ALEC as a collection of static and dynamic maps that show spatial information combined with multimedia content, taking into account that in addition to the maps, the full compendium includes images, illustrations, photographs, audio and text comments. The interactive ALEC is also a good example of how to use today's geo-technology tools, which are essential for the dissemination of geolinguistic information through the internet, improving access to and distribution of the web Atlas.

  20. WEB SERVER-BASED EXAMINATION INFORMATION SYSTEM AT SMK BINA ISLAM MANDIRI (BISMA) KERSANA BREBES TEGAL

    Directory of Open Access Journals (Sweden)

    M. Rifqi Tsani

    2016-09-01

    Full Text Available With the continuing advance of information technology, many activities can be greatly assisted by the internet. With the internet, services can easily be made accessible from anywhere and at any time. Today almost everyone accesses information online, for example through websites. SMK BISMA Kersana is a fairly well-known private school in the Kersana area. Every year the school has difficulty preparing the exam questions to be given to its students. Likewise, the grade-processing system at SMK BISMA Kersana is still very simple and therefore time-consuming. To overcome these problems, a computerized system is needed to support the progress and development of the school. An online examination and student assessment system based on a web server was therefore designed, in which teachers manage the students' exam questions directly, and students can see their results immediately after taking an exam. The design method used to build this web server-based examination information system is ADDIE (Analysis, Design, Development or Production, Implementation or Delivery, and Evaluation). Using this web server-based information system helps teachers grade students who have taken an exam, because the system returns the exam score immediately after the student finishes the exam.

  1. FOLDNA, a Web Server for Self-Assembled DNA Nanostructure Autoscaffolds and Autostaples

    Directory of Open Access Journals (Sweden)

    Chensheng Zhou

    2012-01-01

    Full Text Available DNA self-assembly is a nanotechnology that folds DNA into desired shapes. Self-assembled DNA nanostructures, also known as origami, are increasingly valuable in nanomaterial and biosensing applications. Two ways to use DNA nanostructures in medicine are to form nanoarrays, and to work as vehicles in drug delivery. The DNA nanostructures perform well as a biomaterial in these areas because they have spatially addressable and size controllable properties. However, manually designing complementary DNA sequences for self-assembly is a technically demanding and time consuming task, which makes it advantageous for computers to do this job instead. We have developed a web server, FOLDNA, which can automatically design 2D self-assembled DNA nanostructures according to custom pictures and scaffold sequences provided by the users. It is the first web server to provide an entirely automatic design of self-assembled DNA nanostructure, and it takes merely a second to generate comprehensive information for molecular experiments including: scaffold DNA pathways, staple DNA directions, and staple DNA sequences. This program could save as much as several hours in the designing step for each DNA nanostructure. We randomly selected some shapes and corresponding outputs from our server and validated its performance in molecular experiments.
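
    At the core of any staple-design step is Watson-Crick complementarity: a staple segment is the reverse complement of the scaffold region it binds. The snippet below shows only that elementary operation on an invented scaffold fragment; it is not FOLDNA's design algorithm.

      # Elementary building block of staple design: a staple segment is the
      # reverse complement of the scaffold region it pins down.
      # Example scaffold fragment is invented; this is not FOLDNA's algorithm.
      COMPLEMENT = str.maketrans("ACGT", "TGCA")


      def staple_for(scaffold_region: str) -> str:
          """Return the reverse complement of a scaffold region."""
          return scaffold_region.translate(COMPLEMENT)[::-1]


      if __name__ == "__main__":
          scaffold = "ATGCCGTAACGT"
          print("scaffold:", scaffold)
          print("staple:  ", staple_for(scaffold))   # ACGTTACGGCAT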

  2. Rucio WebUI - The Web Interface for the ATLAS Distributed Data Management

    CERN Document Server

    Beermann, Thomas; The ATLAS collaboration; Barisits, Martin-Stefan; Serfon, Cedric; Garonne, Vincent

    2016-01-01

    With the current distributed data management system for ATLAS, called Rucio, all user interactions, e.g. the Rucio command line tools or the ATLAS workload management system, communicate with Rucio through the same REST API. This common interface makes it possible to interact with Rucio from many different programming languages, including JavaScript. Using common web application frameworks and libraries such as JQuery and web.py, a web application for Rucio was built. The main component is R2D2 - the Rucio Rule Definition Droid - which gives users a simple way to manage their data on the grid. They can search for particular datasets, get details about their metadata and available replicas, and easily create rules to make new replicas and delete them when no longer needed. On the other hand, it is possible for site admins to restrict transfers to their site by setting quotas and manually approving transfers. Besides R2D2, additional features include transfer backlog monitoring for shifters, group space monitoring for gr...

  3. Web GIS in practice IV: publishing your health maps and connecting to remote WMS sources using the Open Source UMN MapServer and DM Solutions MapLab

    Directory of Open Access Journals (Sweden)

    Honda Kiyoshi

    2006-01-01

    Full Text Available Abstract Open Source Web GIS software systems have reached a stage of maturity, sophistication, robustness and stability, and usability and user friendliness rivalling that of commercial, proprietary GIS and Web GIS server products. The Open Source Web GIS community is also actively embracing OGC (Open Geospatial Consortium standards, including WMS (Web Map Service. WMS enables the creation of Web maps that have layers coming from multiple different remote servers/sources. In this article we present one easy to implement Web GIS server solution that is based on the Open Source University of Minnesota (UMN MapServer. By following the accompanying step-by-step tutorial instructions, interested readers running mainstream Microsoft® Windows machines and with no prior technical experience in Web GIS or Internet map servers will be able to publish their own health maps on the Web and add to those maps additional layers retrieved from remote WMS servers. The 'digital Asia' and 2004 Indian Ocean tsunami experiences in using free Open Source Web GIS software are also briefly described.
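
    A WMS layer like those described above is retrieved with a standard OGC GetMap request, which is what lets a single Web map combine layers from several remote servers. The sketch below assembles such a request URL; the endpoint and layer name are placeholders, not a real MapServer installation.

      # Sketch of an OGC WMS 1.1.1 GetMap request, the mechanism that lets a Web map
      # pull layers from multiple remote servers. Endpoint URL and layer name are placeholders.
      from urllib.parse import urlencode

      params = {
          "SERVICE": "WMS",
          "VERSION": "1.1.1",
          "REQUEST": "GetMap",
          "LAYERS": "health_facilities",    # hypothetical layer name
          "STYLES": "",
          "SRS": "EPSG:4326",
          "BBOX": "95.0,2.0,98.0,6.0",      # lon/lat bounding box (roughly the Aceh region)
          "WIDTH": "800",
          "HEIGHT": "600",
          "FORMAT": "image/png",
          "TRANSPARENT": "TRUE",
      }

      url = "https://example.org/cgi-bin/mapserv?" + urlencode(params)
      print(url)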

  4. ATLAS EventIndex Data Collection Supervisor and Web Interface

    CERN Document Server

    Garcia Montoro, Carlos; The ATLAS collaboration; Sanchez, Javier

    2016-01-01

    The EventIndex project consists of the development and deployment of a complete catalogue of events for the ATLAS experiment [1][2] at the LHC accelerator at CERN. In 2015 the ATLAS experiment produced 12 billion real events in 1 million files, and 5 billion simulated events in 8 million files. The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure. A subset of this information is copied to an Oracle relational database. This paper presents two components of the ATLAS EventIndex [3]: its data collection supervisor and its web interface partner.

  5. ATLAS EventIndex Data Collection Supervisor and Web Interface

    CERN Document Server

    Garcia Montoro, Carlos; The ATLAS collaboration

    2016-01-01

    The EventIndex project consists of the development and deployment of a complete catalogue of events for the ATLAS experiment at the LHC accelerator at CERN. In 2015 the ATLAS experiment produced 12 billion real events in 1 million files, and 5 billion simulated events in 8 million files. The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure. A subset of this information is copied to an Oracle relational database. These slides present two components of the ATLAS EventIndex: its data collection supervisor and its web interface partner.

  6. HPEPDOCK: a web server for blind peptide-protein docking based on a hierarchical algorithm.

    Science.gov (United States)

    Zhou, Pei; Jin, Bowen; Li, Hao; Huang, Sheng-You

    2018-05-09

    Protein-peptide interactions are crucial in many cellular functions. Therefore, determining the structure of protein-peptide complexes is important for understanding the molecular mechanism of related biological processes and developing peptide drugs. HPEPDOCK is a novel web server for blind protein-peptide docking through a hierarchical algorithm. Instead of running lengthy simulations to refine peptide conformations, HPEPDOCK considers the peptide flexibility through an ensemble of peptide conformations generated by our MODPEP program. For blind global peptide docking, HPEPDOCK obtained a success rate of 33.3% in binding mode prediction on a benchmark of 57 unbound cases when the top 10 models were considered, compared to 21.1% for pepATTRACT server. HPEPDOCK also performed well in docking against homology models and obtained a success rate of 29.8% within top 10 predictions. For local peptide docking, HPEPDOCK achieved a high success rate of 72.6% on a benchmark of 62 unbound cases within top 10 predictions, compared to 45.2% for HADDOCK peptide protocol. Our HPEPDOCK server is computationally efficient and consumed an average of 29.8 mins for a global peptide docking job and 14.2 mins for a local peptide docking job. The HPEPDOCK web server is available at http://huanglab.phys.hust.edu.cn/hpepdock/.

  7. HDOCK: a web server for protein–protein and protein–DNA/RNA docking based on a hybrid strategy

    Science.gov (United States)

    Yan, Yumeng; Zhang, Di; Zhou, Pei; Li, Botong

    2017-01-01

    Abstract Protein–protein and protein–DNA/RNA interactions play a fundamental role in a variety of biological processes. Determining the complex structures of these interactions is valuable, in which molecular docking has played an important role. To automatically make use of the binding information from the PDB in docking, here we have presented HDOCK, a novel web server of our hybrid docking algorithm of template-based modeling and free docking, in which cases with misleading templates can be rescued by the free docking protocol. The server supports protein–protein and protein–DNA/RNA docking and accepts both sequence and structure inputs for proteins. The docking process is fast and consumes about 10–20 min for a docking run. Tested on the cases with weakly homologous complexes of server. The HDOCK web server is available at http://hdock.phys.hust.edu.cn/. PMID:28521030

  8. AsteriX: a Web server to automatically extract ligand coordinates from figures in PDF articles.

    Science.gov (United States)

    Lounnas, V; Vriend, G

    2012-02-27

    Coordinates describing the chemical structures of small molecules that are potential ligands for pharmaceutical targets are used at many stages of the drug design process. The coordinates of the vast majority of ligands can be obtained from either publicly accessible or commercial databases. However, interesting ligands sometimes are only available from the scientific literature, in which case their coordinates need to be reconstructed manually - a process that consists of a series of time-consuming steps. We present a Web server that helps reconstruct the three-dimensional (3D) coordinates of ligands for which a two-dimensional (2D) picture is available in a PDF file. The software, called AsteriX, analyses every picture contained in the PDF file and attempts to determine automatically whether or not it contains ligands. Areas in pictures that may contain molecular structures are processed to extract connectivity and atom type information that allow coordinates to be subsequently reconstructed. The AsteriX Web server was tested on a series of articles containing a large diversity of graphical representations. In total, 88% of 3249 ligand structures present in the test set were identified as chemical diagrams. Of these, about half were interpreted correctly as 3D structures, and a further one-third required only minor manual corrections. It is in principle impossible to always correctly reconstruct 3D coordinates from pictures, because there are many different protocols for drawing a 2D image of a ligand and, more importantly, a wide variety of semantic annotations are possible. The AsteriX Web server therefore includes facilities that allow users to augment partial or partially correct 3D reconstructions. All 3D reconstructions are submitted, checked, and corrected by the users at the server and are freely available for everybody. The coordinates of the reconstructed ligands are made available in a series of formats commonly used in drug design research. The

  9. Super: a web server to rapidly screen superposable oligopeptide fragments from the protein data bank

    Science.gov (United States)

    Collier, James H.; Lesk, Arthur M.; Garcia de la Banda, Maria; Konagurthu, Arun S.

    2012-01-01

    Searching for well-fitting 3D oligopeptide fragments within a large collection of protein structures is an important task central to many analyses involving protein structures. This article reports a new web server, Super, dedicated to the task of rapidly screening the protein data bank (PDB) to identify all fragments that superpose with a query under a prespecified threshold of root-mean-square deviation (RMSD). Super relies on efficiently computing a mathematical bound on the commonly used structural similarity measure, RMSD of superposition. This allows the server to filter out a large proportion of fragments that are unrelated to the query; >99% of the total number of fragments in some cases. For a typical query, Super scans the current PDB containing over 80 500 structures (with ∼40 million potential oligopeptide fragments to match) in under a minute. Super web server is freely accessible from: http://lcb.infotech.monash.edu.au/super. PMID:22638586
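
    The similarity measure that Super bounds and filters on is the familiar RMSD of superposition. A minimal sketch of plain coordinate RMSD between two already-superposed fragments is shown below; the full task also requires finding the optimal superposition (e.g. with the Kabsch algorithm), which is omitted, and the coordinates are invented.

      # Minimal RMSD between two already-superposed oligopeptide fragments.
      # Finding the optimal superposition itself (e.g. Kabsch algorithm) is omitted;
      # coordinates are invented for illustration.
      import math


      def rmsd(coords_a, coords_b):
          assert len(coords_a) == len(coords_b)
          sq = sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
                   for (xa, ya, za), (xb, yb, zb) in zip(coords_a, coords_b))
          return math.sqrt(sq / len(coords_a))


      if __name__ == "__main__":
          frag_query = [(0.0, 0.0, 0.0), (1.5, 0.2, 0.1), (3.0, 0.1, -0.2)]
          frag_hit   = [(0.1, -0.1, 0.0), (1.4, 0.3, 0.0), (3.2, 0.0, -0.1)]
          print(f"RMSD = {rmsd(frag_query, frag_hit):.3f} A")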

  10. HDOCK: a web server for protein-protein and protein-DNA/RNA docking based on a hybrid strategy.

    Science.gov (United States)

    Yan, Yumeng; Zhang, Di; Zhou, Pei; Li, Botong; Huang, Sheng-You

    2017-07-03

    Protein-protein and protein-DNA/RNA interactions play a fundamental role in a variety of biological processes. Determining the complex structures of these interactions is valuable, in which molecular docking has played an important role. To automatically make use of the binding information from the PDB in docking, here we have presented HDOCK, a novel web server of our hybrid docking algorithm of template-based modeling and free docking, in which cases with misleading templates can be rescued by the free docking protocol. The server supports protein-protein and protein-DNA/RNA docking and accepts both sequence and structure inputs for proteins. The docking process is fast and consumes about 10-20 min for a docking run. Tested on the cases with weakly homologous complexes of server. The HDOCK web server is available at http://hdock.phys.hust.edu.cn/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. RBscore&NBench: a high-level web server for nucleic acid binding residues prediction with a large-scale benchmarking database.

    Science.gov (United States)

    Miao, Zhichao; Westhof, Eric

    2016-07-08

    RBscore&NBench combines a web server, RBscore and a database, NBench. RBscore predicts RNA-/DNA-binding residues in proteins and visualizes the prediction scores and features on protein structures. The scoring scheme of RBscore directly links feature values to nucleic acid binding probabilities and illustrates the nucleic acid binding energy funnel on the protein surface. To avoid dataset, binding site definition and assessment metric biases, we compared RBscore with 18 web servers and 3 stand-alone programs on 41 datasets, which demonstrated the high and stable accuracy of RBscore. A comprehensive comparison led us to develop a benchmark database named NBench. The web server is available on: http://ahsoka.u-strasbg.fr/rbscorenbench/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. WAMI: a web server for the analysis of minisatellite maps

    Directory of Open Access Journals (Sweden)

    El-Kalioby Mohamed

    2010-06-01

    Full Text Available Abstract Background Minisatellites are genomic loci composed of tandem arrays of short repetitive DNA segments. A minisatellite map is a sequence of symbols that represents the tandem repeat array such that the set of symbols is in one-to-one correspondence with the set of distinct repeats. Due to variations in repeat type and organization as well as copy number, the minisatellite maps have been widely used in forensic and population studies. In either domain, researchers need to compare the set of maps to each other, to build phylogenetic trees, to spot structural variations, and to study duplication dynamics. Efficient algorithms for these tasks are required to carry them out reliably and in reasonable time. Results In this paper we present WAMI, a web-server for the analysis of minisatellite maps. It performs the above mentioned computational tasks using efficient algorithms that take the model of map evolution into account. The WAMI interface is easy to use and the results of each analysis task are visualized. Conclusions To the best of our knowledge, WAMI is the first server providing all these computational facilities to the minisatellite community. The WAMI web-interface and the source code of the underlying programs are available at http://www.nubios.nileu.edu.eg/tools/wami.

  13. GeoServer: the Open Source geospatial server - new features in version 2.3.0

    Directory of Open Access Journals (Sweden)

    Simone Giannecchini

    2013-04-01

    Full Text Available GeoServer is an Open Source geospatial server developed with Java Enterprise technology for managing, sharing and editing geospatial data according to the OGC and ISO Technical Committee 211 standards. It provides the basic functionality for creating spatial data infrastructures (SDI) and is designed for interoperability, publishing data from any major spatial data source using open standards: it is the reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) and Web Coverage Service (WCS) standards, as well as a high-performance certified compliant Web Map Service (WMS). GeoServer forms a core component of the Geospatial Web.

  14. ProTox: a web server for the in silico prediction of rodent oral toxicity.

    Science.gov (United States)

    Drwal, Malgorzata N; Banerjee, Priyanka; Dunkel, Mathias; Wettig, Martin R; Preissner, Robert

    2014-07-01

    Animal trials are currently the major method for determining the possible toxic effects of drug candidates and cosmetics. In silico prediction methods represent an alternative approach and aim to rationalize the preclinical drug development, thus enabling the reduction of the associated time, costs and animal experiments. Here, we present ProTox, a web server for the prediction of rodent oral toxicity. The prediction method is based on the analysis of the similarity of compounds with known median lethal doses (LD50) and incorporates the identification of toxic fragments, therefore representing a novel approach in toxicity prediction. In addition, the web server includes an indication of possible toxicity targets which is based on an in-house collection of protein-ligand-based pharmacophore models ('toxicophores') for targets associated with adverse drug reactions. The ProTox web server is open to all users and can be accessed without registration at: http://tox.charite.de/tox. The only requirement for the prediction is the two-dimensional structure of the input compounds. All ProTox methods have been evaluated based on a diverse external validation set and displayed strong performance (sensitivity, specificity and precision of 76, 95 and 75%, respectively) and superiority over other toxicity prediction tools, indicating their possible applicability for other compound classes. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
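
    The sensitivity, specificity and precision figures quoted for the external validation set are the standard confusion-matrix ratios. A short reminder of how they are computed follows; the counts are invented for illustration and are not the ProTox evaluation data.

      # Reminder of the confusion-matrix metrics quoted above.
      # The counts below are invented and are not the ProTox validation data.
      def classification_metrics(tp, fp, tn, fn):
          sensitivity = tp / (tp + fn)   # recall on toxic compounds
          specificity = tn / (tn + fp)   # recall on non-toxic compounds
          precision = tp / (tp + fp)
          return sensitivity, specificity, precision


      if __name__ == "__main__":
          sens, spec, prec = classification_metrics(tp=76, fp=25, tn=475, fn=24)
          print(f"sensitivity={sens:.0%} specificity={spec:.0%} precision={prec:.0%}")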

  15. The visualCMAT: A web-server to select and interpret correlated mutations/co-evolving residues in protein families.

    Science.gov (United States)

    Suplatov, Dmitry; Sharapova, Yana; Timonina, Daria; Kopylov, Kirill; Švedas, Vytas

    2018-04-01

    The visualCMAT web-server was designed to assist experimental research in the fields of protein/enzyme biochemistry, protein engineering, and drug discovery by providing an intuitive and easy-to-use interface for the analysis of correlated mutations/co-evolving residues. Sequence and structural information describing homologous proteins are used to predict correlated substitutions by the Mutual information-based CMAT approach, to classify them into spatially close co-evolving pairs, which either form a direct physical contact or interact with the same ligand (e.g. a substrate or a crystallographic water molecule), and long-range correlations, and to annotate and rank binding sites on the protein surface by the presence of statistically significant co-evolving positions. The results of visualCMAT are organized for convenient visual analysis and can be downloaded to a local computer as a content-rich all-in-one PyMol session file with multiple layers of annotation corresponding to the bioinformatic, statistical and structural analyses of the predicted co-evolution, or further studied online using the built-in interactive analysis tools. The online interactivity is implemented in HTML5, so neither plugins nor Java are required. The visualCMAT web-server is integrated with the Mustguseal web-server, which is capable of constructing large structure-guided sequence alignments of protein families and superfamilies using all available information about their structures and sequences in public databases. The visualCMAT web-server can be used to understand the relationship between structure and function in proteins, to select hotspots and compensatory mutations for rational design and directed evolution experiments aimed at producing novel enzymes with improved properties, and to study the mechanisms of selective ligand binding and allosteric communication between topologically independent sites in protein structures. The web-server is freely available at https
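
    The Mutual information statistic underlying correlated-mutation analysis measures how much the residue observed in one alignment column tells you about the residue in another. A bare-bones version for two columns of a toy alignment is sketched below; it omits the background corrections and significance filtering a real analysis needs and is not the visualCMAT code.

      # Bare-bones mutual information between two alignment columns, the statistic
      # behind correlated-mutation analysis. No background corrections or
      # significance filtering are applied; this is not the visualCMAT implementation.
      import math
      from collections import Counter


      def mutual_information(col_i, col_j):
          n = len(col_i)
          p_i, p_j = Counter(col_i), Counter(col_j)
          p_ij = Counter(zip(col_i, col_j))
          mi = 0.0
          for (a, b), n_ab in p_ij.items():
              p_ab = n_ab / n
              mi += p_ab * math.log2(p_ab / ((p_i[a] / n) * (p_j[b] / n)))
          return mi


      if __name__ == "__main__":
          # Toy columns from six homologous sequences: substitutions co-occur.
          column_12 = list("LLIIVV")
          column_87 = list("DDEEKK")
          print(f"MI = {mutual_information(column_12, column_87):.3f} bits")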

  16. Personalized Pseudonyms for Servers in the Cloud

    Directory of Open Access Journals (Sweden)

    Xiao Qiuyu

    2017-10-01

    Full Text Available A considerable and growing fraction of servers, especially of web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve privacy of clients from network attackers residing between the clients and the cloud: We design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud’s tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced “popsicle”, a persistent pseudonym for a tenant server that can be used by a single client to access the server, whose real identity is protected by the cloud from both passive and active network attackers. When instantiated for TLS-based access to web servers, our design works with all major browsers and requires no additional client-side software and minimal changes to the client user experience. Moreover, changes to tenant servers can be hidden in supporting software (operating systems and web-programming frameworks without imposing on web-content development. Perhaps most notably, our system boosts privacy with minimal impact to web-browsing performance, after some initial setup during a user’s first access to each web server.

  17. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    Directory of Open Access Journals (Sweden)

    Asish Kumar Dalai

    2017-01-01

    Full Text Available Reports on web application security risks show that SQL injection is the top vulnerability. The shift from static to dynamic web pages has led to the use of databases in web applications. Due to the lack of secure coding techniques, SQL injection vulnerabilities prevail in a large set of web applications. A successful SQL injection attack poses a serious threat to the database, the web application, and the entire web server. In this article, the authors propose a novel method for the prevention of SQL injection attacks. SQL injection attacks are classified based on the methods used to exploit the vulnerability. The proposed method proves to be efficient in its ability to prevent all types of SQL injection attacks. Some popular SQL injection attack tools and web application security datasets have been used to validate the model. The results obtained are promising, with a high accuracy rate for detection of SQL injection attacks.
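
    The standard server-side defence that attack classifications like this build on is to stop concatenating user input into SQL text and to use parameterized queries instead. A minimal contrast is sketched below, using Python's sqlite3 purely for illustration; the article's own server-side code-modification method is not reproduced here.

      # Minimal contrast between an injectable query built by string concatenation
      # and a parameterized query. Uses sqlite3 purely for illustration; the
      # article's own server-side code-modification technique is not reproduced here.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

      user_input = "' OR '1'='1"   # classic injection payload

      # Vulnerable: the payload becomes part of the SQL text and bypasses the check.
      injectable = f"SELECT * FROM users WHERE name = '{user_input}'"
      print("concatenated query returns:", conn.execute(injectable).fetchall())

      # Safe: the driver treats the payload as data, not as SQL.
      safe = "SELECT * FROM users WHERE name = ?"
      print("parameterized query returns:", conn.execute(safe, (user_input,)).fetchall())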

  18. CASTp 3.0: computed atlas of surface topography of proteins.

    Science.gov (United States)

    Tian, Wei; Chen, Chang; Lei, Xue; Zhao, Jieling; Liang, Jie

    2018-06-01

    Geometric and topological properties of protein structures, including surface pockets, interior cavities and cross channels, are of fundamental importance for proteins to carry out their functions. Computed Atlas of Surface Topography of proteins (CASTp) is a web server that provides online services for locating, delineating and measuring these geometric and topological properties of protein structures. It has been widely used since its inception in 2003. In this article, we present the latest version of the web server, CASTp 3.0. CASTp 3.0 continues to provide reliable and comprehensive identifications and quantifications of protein topography. In addition, it now provides: (i) imprints of the negative volumes of pockets, cavities and channels, (ii) topographic features of biological assemblies in the Protein Data Bank, (iii) improved visualization of protein structures and pockets, and (iv) more intuitive structural and annotated information, including information of secondary structure, functional sites, variant sites and other annotations of protein residues. The CASTp 3.0 web server is freely accessible at http://sts.bioe.uic.edu/castp/.

  19. The web server of IBM's Bioinformatics and Pattern Discovery group.

    Science.gov (United States)

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo

    2003-07-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  20. The CAD-score web server: contact area-based comparison of structures and interfaces of proteins, nucleic acids and their complexes.

    Science.gov (United States)

    Olechnovič, Kliment; Venclovas, Ceslovas

    2014-07-01

    The Contact Area Difference score (CAD-score) web server provides a universal framework to compute and analyze discrepancies between different 3D structures of the same biological macromolecule or complex. The server accepts both single-subunit and multi-subunit structures and can handle all the major types of macromolecules (proteins, RNA, DNA and their complexes). It can perform numerical comparison of both structures and interfaces. In addition to entire structures and interfaces, the server can assess user-defined subsets. The CAD-score server performs both global and local numerical evaluations of structural differences between structures or interfaces. The results can be explored interactively using sortable tables of global scores, profiles of local errors, superimposed contact maps and 3D structure visualization. The web server could be used for tasks such as comparison of models with the native (reference) structure, comparison of X-ray structures of the same macromolecule obtained in different states (e.g. with and without a bound ligand), analysis of nuclear magnetic resonance (NMR) structural ensemble or structures obtained in the course of molecular dynamics simulation. The web server is freely accessible at: http://www.ibt.lt/bioinformatics/cad-score. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
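
    A heavily simplified flavour of a contact-area-difference score: given reference and model residue-residue contact areas, sum the bounded absolute differences and normalize by the reference total, so identical contacts give 1 and completely missed contacts pull the score towards 0. The sketch below is an approximation for intuition only, not the published CAD-score definition or the server's code, and the contact areas are invented.

      # Heavily simplified contact-area-difference style score: 1 minus the bounded
      # absolute difference of contact areas, normalized by the reference total.
      # Approximation for intuition only; not the published CAD-score definition.
      def cad_like_score(reference: dict, model: dict) -> float:
          total = sum(reference.values())
          diff = sum(min(abs(t - model.get(pair, 0.0)), t) for pair, t in reference.items())
          return 1.0 - diff / total


      if __name__ == "__main__":
          # Invented contact areas (in square Angstroms) between residue pairs.
          ref = {("A10", "A55"): 22.0, ("A11", "A54"): 8.5, ("A12", "B3"): 15.0}
          mod = {("A10", "A55"): 20.0, ("A11", "A54"): 0.0, ("A12", "B3"): 18.0}
          print(f"CAD-like score = {cad_like_score(ref, mod):.3f}")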

  1. Creation of a Web-Based GIS Server and Custom Geoprocessing Tools for Enhanced Hydrologic Applications

    Science.gov (United States)

    Welton, B.; Chouinard, K.; Sultan, M.; Becker, D.; Milewski, A.; Becker, R.

    2010-12-01

    Rising populations in the arid and semi-arid parts of the world are increasing the demand for fresh water supplies worldwide. Many data sets needed for the assessment of hydrologic applications across vast regions of the world are expensive, unpublished, difficult to obtain, or at varying scales, which complicates their use. Fortunately, this situation is changing with the development of global remote sensing datasets and web-based platforms such as GIS Server. GIS provides a cost-effective vehicle for comparing, analyzing, and querying a variety of spatial datasets as geographically referenced layers. We have recently constructed a web-based GIS that incorporates all relevant geological, geochemical, geophysical, and remote sensing data sets that were readily used to identify reservoir types and potential well locations on local and regional scales in various tectonic settings including: (1) extensional environments (Red Sea rift), (2) transcurrent fault systems (Najd Fault in the Arabian-Nubian Shield), and (3) compressional environments (Himalayas). The web-based GIS can also be used to detect spatial and temporal trends in precipitation, recharge, and runoff in large watersheds on local, regional, and continental scales. These applications were enabled through the construction of a web-based ArcGIS Server with a Google Maps interface and the development of customized geoprocessing tools. ArcGIS Server provides out-of-the-box setups that are generic in nature. This platform includes all of the standard web-based GIS tools (e.g. pan, zoom, identify, search, data querying, and measurement). In addition to the standard suite of tools provided by ArcGIS Server, an additional set of advanced data manipulation and display tools was also developed to allow for a more complete and customizable view of the area of interest. The most notable addition to the standard GIS Server tools is the set of custom on-demand geoprocessing tools (e.g., graph, statistical functions, custom raster

  2. NeXT server

    CERN Document Server

    1989-01-01

    The first website at CERN - and in the world - was dedicated to the World Wide Web project itself and was hosted on Berners-Lee's NeXT computer. The website described the basic features of the web; how to access other people's documents and how to set up your own server. This NeXT machine - the original web server - is still at CERN. As part of the project to restore the first website, in 2013 CERN reinstated the world's first website to its original address.

  3. MetaRanker 2.0: a web server for prioritization of genetic variation data

    DEFF Research Database (Denmark)

    Pers, Tune Hannes; Dworzynski, Piotr; Thomas, Cecilia Engel

    2013-01-01

    MetaRanker 2.0 is a web server for prioritization of common and rare frequency genetic variation data. Based on heterogeneous data sets including genetic association data, protein–protein interactions, large-scale text-mining data, copy number variation data and gene expression experiments, Meta...

  4. mirVAFC: A Web Server for Prioritizations of Pathogenic Sequence Variants from Exome Sequencing Data via Classifications.

    Science.gov (United States)

    Li, Zhongshan; Liu, Zhenwei; Jiang, Yi; Chen, Denghui; Ran, Xia; Sun, Zhong Sheng; Wu, Jinyu

    2017-01-01

    Exome sequencing has been widely used to identify the genetic variants underlying human genetic disorders for clinical diagnoses, but identifying pathogenic sequence variants among the huge number of benign ones is complicated and challenging. Here, we describe a new Web server named mirVAFC for pathogenic sequence variant prioritization from clinical exome sequencing (CES) variant data of a single individual or family. mirVAFC is able to comprehensively annotate sequence variants, filter out most irrelevant variants using custom criteria, classify variants into different categories according to estimated pathogenicity, and finally prioritize pathogenic variants based on these classifications and mutation effects. Case studies using different types of datasets for different diseases, drawn from publications and our in-house data, have shown that mirVAFC can efficiently identify the correct pathogenic candidates in each case, as in the original work. Overall, the mirVAFC Web server is specifically developed for pathogenic sequence variant identification from family-based CES variants using classification-based prioritization. The mirVAFC Web server is freely accessible at https://www.wzgenomics.cn/mirVAFC/. © 2016 WILEY PERIODICALS, INC.

  5. ATLAS Live: Collaborative Information Streams

    Energy Technology Data Exchange (ETDEWEB)

    Goldfarb, Steven [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Collaboration: ATLAS Collaboration

    2011-12-23

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  6. ATLAS Live: Collaborative Information Streams

    International Nuclear Information System (INIS)

    Goldfarb, Steven

    2011-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  7. Visualization of historical data for the ATLAS detector controls - DDV

    Science.gov (United States)

    Maciejewski, J.; Schlenker, S.

    2017-10-01

    The ATLAS experiment is one of four detectors located at the Large Hadron Collider (LHC) based at CERN. Its detector control system (DCS) stores the slow control data acquired by the back-end of distributed WinCC OA applications in an Oracle relational database, which enables the data to be retrieved for later analysis, debugging and detector development. The ATLAS DCS Data Viewer (DDV) is a client-server application providing access to the historical data from outside the experiment network. The server builds optimized SQL queries, retrieves the data from the database and serves it to the clients via HTTP connections. The server also implements protection methods to prevent malicious use of the database. The client is an AJAX-type web application based on Vaadin (a framework built around the Google Web Toolkit, GWT), which gives users the possibility to access the data with ease. The DCS metadata can be selected using a column-tree navigation or a search engine supporting regular expressions. The data are visualized by a selection of output modules such as JavaScript value-over-time plots or a lazy-loading table widget. Additional plugins give users the possibility to retrieve the data in ROOT format or as an ASCII file. Control system alarms can also be visualized in a dedicated table if necessary. Python mock-up scripts can be generated by the client, allowing the user to query the pythonic DDV server directly, so that users can embed the scripts into more complex analysis programs. Users are also able to store searches and output configurations as XML on the server to share with others via URL or to embed in HTML.
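
    The Python mock-up scripts mentioned above amount to HTTP GET requests against the DDV server with the DCS element names and time range as query parameters. The snippet below only illustrates that pattern; the host, path and parameter names are placeholders, since the real DDV endpoint and its parameters are not documented in this record.

      # Illustration of querying a DDV-like HTTP server for historical DCS data.
      # Host name, path and parameter names are placeholders, not the real DDV API.
      from urllib.parse import urlencode

      BASE_URL = "https://ddv.example.cern.ch/data"   # placeholder host and path

      params = {
          "element": "ATLAS_PIX:Temperature.value",   # placeholder DCS element name
          "from": "2017-01-01T00:00:00",
          "to": "2017-01-02T00:00:00",
          "format": "csv",
      }

      url = BASE_URL + "?" + urlencode(params)
      print("would request:", url)
      # A real script would then issue an HTTP GET against this URL and parse the reply.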

  8. Implementation of an Embedded Web Server Application for Wireless Control of Brain Computer Interface Based Home Environments.

    Science.gov (United States)

    Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan

    2016-01-01

    Brain Computer Interface (BCI) based environment control systems can facilitate the lives of people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. Alongside ease of use, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded, low-cost, low-power, easy-to-use web server which is employed for internet-based wireless control of a BCI-based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input to the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. The performance of the BCI system was evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server successfully enables internet-based wireless control of electrical home appliances through BCIs.

  9. Planetary Data Systems (PDS) Imaging Node Atlas II

    Science.gov (United States)

    Stanboli, Alice; McAuley, James M.

    2013-01-01

    The Planetary Image Atlas (PIA) is a Rich Internet Application (RIA) that serves planetary imaging data to the science community and the general public. PIA also utilizes the USGS Unified Planetary Coordinate system (UPC) and the on-Mars map server. The Atlas was designed to provide the ability to search and filter through more than 8 million planetary image files. This software is a three-tier Web application that contains a search engine backend (MySQL, JAVA), a Web service interface (SOAP) between server and client, and a GWT Google Maps API client front end. This application allows for the search, retrieval, and download of planetary images and associated metadata from the following missions: 2001 Mars Odyssey, Cassini, Galileo, LCROSS, Lunar Reconnaissance Orbiter, Mars Exploration Rover, Mars Express, Magellan, Mars Global Surveyor, Mars Pathfinder, Mars Reconnaissance Orbiter, MESSENGER, Phoenix, Viking Lander, Viking Orbiter, and Voyager. The Atlas utilizes the UPC to translate mission-specific coordinate systems into a unified coordinate system, allowing the end user to query across missions with similar targets. If desired, the end user can also use a mission-specific view of the Atlas. The mission-specific views rely on the same code base. This application is a major improvement over the initial version of the Planetary Image Atlas. It is a multi-mission search engine. This tool includes both basic and advanced search capabilities, providing a product search tool to interrogate the collection of planetary images. It lets the end user query information about each image while ignoring data the user has no interest in. Users can reduce the number of images to look at by defining an area of interest with latitude and longitude ranges.

  10. FireProt: web server for automated design of thermostable proteins

    Science.gov (United States)

    Musil, Milos; Stourac, Jan; Brezovsky, Jan; Prokop, Zbynek; Zendulka, Jaroslav; Martinek, Tomas

    2017-01-01

    Abstract There is continuous interest in increasing protein stability to enhance usability in numerous biomedical and biotechnological applications. A number of in silico tools for predicting the effect of mutations on protein stability have been developed recently. However, the existing tools typically predict only single-point mutations with a small effect on protein stability, and these predictions have to be followed by laborious protein expression, purification, and characterization. Here, we present FireProt, a web server for the automated design of multiple-point thermostable mutant proteins that combines structural and evolutionary information in its calculation core. FireProt utilizes sixteen tools and three protein engineering strategies for making reliable protein designs. The server is complemented with an interactive, easy-to-use interface that allows users to directly analyze and optionally modify designed thermostable mutants. FireProt is freely available at http://loschmidt.chemi.muni.cz/fireprot. PMID:28449074

  11. Catalytic site identification—a web server to identify catalytic site structural matches throughout PDB

    Science.gov (United States)

    Kirshner, Daniel A.; Nilmeier, Jerome P.; Lightstone, Felice C.

    2013-01-01

    The catalytic site identification web server provides the innovative capability to find structural matches to a user-specified catalytic site among all Protein Data Bank proteins rapidly (in less than a minute). The server also can examine a user-specified protein structure or model to identify structural matches to a library of catalytic sites. Finally, the server provides a database of pre-calculated matches between all Protein Data Bank proteins and the library of catalytic sites. The database has been used to derive a set of hypothesized novel enzymatic function annotations. In all cases, matches and putative binding sites (protein structure and surfaces) can be visualized interactively online. The website can be accessed at http://catsid.llnl.gov. PMID:23680785

  12. 3USS: a web server for detecting alternative 3'UTRs from RNA-seq experiments.

    KAUST Repository

    Le Pera, Loredana; Mazzapioda, Mariagiovanna; Tramontano, Anna

    2015-01-01

    Protein-coding genes with multiple alternative polyadenylation sites can generate mRNA 3'UTR sequences of different lengths, thereby causing the loss or gain of regulatory elements, which can affect stability, localization and translation efficiency. 3USS is a web server developed with the aim of giving experimentalists the possibility to automatically identify alternative 3'UTRs (shorter or longer with respect to a reference transcriptome), an option that is not available in standard RNA-seq data analysis procedures. The tool reports as putative novel the 3'UTRs not annotated in available databases. Furthermore, if data from two related samples are uploaded, common and specific alternative 3'UTRs are identified and reported by the server. 3USS is freely available at http://www.biocomputing.it/3uss_server.

  13. 3USS: a web server for detecting alternative 3'UTRs from RNA-seq experiments.

    KAUST Repository

    Le Pera, Loredana

    2015-01-22

    Protein-coding genes with multiple alternative polyadenylation sites can generate mRNA 3'UTR sequences of different lengths, thereby causing the loss or gain of regulatory elements, which can affect stability, localization and translation efficiency. 3USS is a web server developed with the aim of giving experimentalists the possibility to automatically identify alternative 3'UTRs (shorter or longer with respect to a reference transcriptome), an option that is not available in standard RNA-seq data analysis procedures. The tool reports as putative novel the 3'UTRs not annotated in available databases. Furthermore, if data from two related samples are uploaded, common and specific alternative 3'UTRs are identified and reported by the server. 3USS is freely available at http://www.biocomputing.it/3uss_server.

  14. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR.

    Science.gov (United States)

    van der Schot, Gijs; Bonvin, Alexandre M J J

    2015-08-01

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on unassigned NOE lists (Huang et al. in J Am Chem Soc 127:1665-1674, 2005b, doi: 10.1021/ja047109h). We compare the original submissions using a previous version of the server based on Rosetta version 2.6 with recalculated targets using the new R3FP fragment picker for fragment selection and implementing a new annotation of prediction reliability (van der Schot et al. in J Biomol NMR 57:27-35, 2013, doi: 10.1007/s10858-013-9762-6), both implemented in the CS-Rosetta3 WeNMR server. In this second round of CASD-NMR, the WeNMR CS-Rosetta server has demonstrated a much better performance than in the first round since only converged targets were submitted. Further, recalculation of all CASD-NMR targets using the new version of the server demonstrates that our new annotation of prediction quality is giving reliable results. Predictions annotated as weak are often found to provide useful models, but only for a fraction of the sequence, and should therefore only be used with caution.

  15. DCS data viewer, an application that accesses ATLAS DCS historical data

    International Nuclear Information System (INIS)

    Tsarouchas, C; Schlenker, S; Dimitrov, G; Jahn, G

    2014-01-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The Detector Control System (DCS) of ATLAS is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of the alarms and the archiving of important operational data in a relational database (DB). DCS Data Viewer (DDV) is an application that provides access to the ATLAS DCS historical data through a web interface. Its design is structured using a client-server architecture. The pythonic server connects to the DB and fetches the data by using optimized SQL requests. It communicates with the outside world by accepting HTTP requests and can also be used standalone. The client is an AJAX (Asynchronous JavaScript and XML) interactive web application developed under the Google Web Toolkit (GWT) framework. Its web interface is user friendly, platform and browser independent. The selection of metadata is done via a column-tree view or with a powerful search engine. The final visualization of the data is done using Java applets or JavaScript applications as plugins. The default output is a value-over-time chart, but other types of outputs, such as tables, ASCII or ROOT files, are supported too. Excessive access or malicious use of the database is prevented by a dedicated protection mechanism, allowing the tool to be exposed to hundreds of inexperienced users. The current configuration of the client and of the outputs can be saved in an XML file. Protection against web security attacks is foreseen and authentication constraints have been taken into account, allowing the exposure of the tool to hundreds of users worldwide. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems.
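
    The server side described above is essentially an HTTP front end that turns a request for a DCS element and a time window into an SQL query and returns the rows to the client. The sketch below is a minimal stand-in for that pattern, not the actual DDV code: the dcs_history table, the element name PIX_TEMP_01 and the URL layout are hypothetical, and SQLite replaces the relational archive so the example runs on its own.

      import json
      import sqlite3
      from http.server import BaseHTTPRequestHandler, HTTPServer
      from urllib.parse import urlparse, parse_qs

      # In-memory stand-in for the conditions archive, filled with invented readings.
      DB = sqlite3.connect(":memory:", check_same_thread=False)
      DB.execute("CREATE TABLE dcs_history (element TEXT, ts REAL, value REAL)")
      DB.executemany("INSERT INTO dcs_history VALUES (?, ?, ?)",
                     [("PIX_TEMP_01", t, 20.0 + 0.1 * t) for t in range(10)])

      class HistoryHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # Expected (hypothetical) URL: /history?element=PIX_TEMP_01&t0=0&t1=5
              q = parse_qs(urlparse(self.path).query)
              rows = DB.execute(
                  "SELECT ts, value FROM dcs_history "
                  "WHERE element = ? AND ts BETWEEN ? AND ? ORDER BY ts",
                  (q["element"][0], float(q["t0"][0]), float(q["t1"][0]))).fetchall()
              body = json.dumps(rows).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), HistoryHandler).serve_forever()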

  16. Dcs Data Viewer, an Application that Accesses ATLAS DCS Historical Data

    Science.gov (United States)

    Tsarouchas, C.; Schlenker, S.; Dimitrov, G.; Jahn, G.

    2014-06-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The Detector Control System (DCS) of ATLAS is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of the alarms and the archiving of important operational data in a relational database (DB). DCS Data Viewer (DDV) is an application that provides access to the ATLAS DCS historical data through a web interface. Its design is structured using a client-server architecture. The pythonic server connects to the DB and fetches the data by using optimized SQL requests. It communicates with the outside world by accepting HTTP requests and can also be used standalone. The client is an AJAX (Asynchronous JavaScript and XML) interactive web application developed under the Google Web Toolkit (GWT) framework. Its web interface is user friendly, platform and browser independent. The selection of metadata is done via a column-tree view or with a powerful search engine. The final visualization of the data is done using Java applets or JavaScript applications as plugins. The default output is a value-over-time chart, but other types of outputs, such as tables, ASCII or ROOT files, are supported too. Excessive access or malicious use of the database is prevented by a dedicated protection mechanism, allowing the tool to be exposed to hundreds of inexperienced users. The current configuration of the client and of the outputs can be saved in an XML file. Protection against web security attacks is foreseen and authentication constraints have been taken into account, allowing the exposure of the tool to hundreds of users worldwide. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems.

  17. New nuclear data service at CNEA: retrieval of the update libraries from a local Web-Server

    International Nuclear Information System (INIS)

    Suarez, Patricia M.; Pepe, Maria E.; Sbaffoni, Maria M.

    2000-01-01

    A new on-line Nuclear Data Service was implemented at the National Atomic Energy Commission (CNEA) Web site. The information usually issued by the Nuclear Data Section of the IAEA (NDS-IAEA) on CD-ROM, as well as complementary libraries periodically downloaded from a mirror server of the NDS-IAEA Service located at IPEN, Brazil, are available on the new CNEA Web page. On the site, users can find numerical data on neutron, charged-particle, and photonuclear reactions, nuclear structure, and decay data, with related bibliographic information. This data server is permanently maintained and updated by CNEA staff members. This crew also offers assistance on the use and retrieval of nuclear data to local users. (author)

  18. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    International Nuclear Information System (INIS)

    Valassi, A; Kalkhof, A; Bartoldus, R; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle-tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer of 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.
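
    The caching role of the proxy tier described above can be pictured as a read-through cache keyed by the query: the proxy answers repeated identical requests from memory and only forwards cache misses to the CORAL server. The sketch below shows that pattern generically in Python; it illustrates the idea only and does not reproduce the actual CORAL server proxy protocol, and the upstream function and query text are invented.

      class ReadThroughCache:
          """Answer repeated identical queries from memory; forward misses upstream."""

          def __init__(self, fetch_upstream):
              self._fetch = fetch_upstream   # callable that queries the real server
              self._cache = {}

          def query(self, statement):
              if statement not in self._cache:
                  self._cache[statement] = self._fetch(statement)   # cache miss
              return self._cache[statement]                         # hit or fresh result

      # Hypothetical upstream call standing in for the CORAL server / database round trip.
      def fetch_from_coral_server(statement):
          print("forwarding to server:", statement)
          return [("run", 167776), ("run", 167844)]

      proxy = ReadThroughCache(fetch_from_coral_server)
      proxy.query("SELECT run FROM runs WHERE ready = 1")   # forwarded once
      proxy.query("SELECT run FROM runs WHERE ready = 1")   # served from the cache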

  19. WEBnm@ v2.0: Web server and services for comparing protein flexibility.

    Science.gov (United States)

    Tiwari, Sandhya P; Fuglebakk, Edvin; Hollup, Siv M; Skjærven, Lars; Cragnolini, Tristan; Grindhaug, Svenn H; Tekle, Kidane M; Reuter, Nathalie

    2014-12-30

    Normal mode analysis (NMA) using elastic network models is a reliable and cost-effective computational method to characterise protein flexibility and, by extension, protein dynamics. Further insight into the dynamics-function relationship can be gained by comparing protein motions between protein homologs and functional classifications. This can be achieved by comparing normal modes obtained from sets of evolutionarily related proteins. We have developed an automated tool for comparative NMA of a set of pre-aligned protein structures. The user can submit a sequence alignment in the FASTA format and the corresponding coordinate files in the Protein Data Bank (PDB) format. The computed normalised squared atomic fluctuations and atomic deformation energies of the submitted structures can be easily compared on graphs provided by the web user interface. The web server provides pairwise comparison of the dynamics of all proteins included in the submitted set using two measures: the Root Mean Squared Inner Product and the Bhattacharyya Coefficient. The Comparative Analysis has been implemented on our web server for NMA, WEBnm@, which also provides recently upgraded functionality for NMA of single protein structures. This includes new visualisations of protein motion, visualisation of inter-residue correlations and the analysis of conformational change using overlap analysis. In addition, programmatic access to WEBnm@ is now available through a SOAP-based web service. WEBnm@ is available at http://apps.cbu.uib.no/webnma. WEBnm@ v2.0 is an online tool offering unique capability for comparative NMA on multiple protein structures. Along with a convenient web interface, powerful computing resources, and several methods for mode analyses, WEBnm@ facilitates the assessment of protein flexibility within protein families and superfamilies. These analyses can give a good view of how the structures move and how flexibility is conserved across the different structures.
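
    One of the two pairwise comparison measures mentioned above, the Root Mean Squared Inner Product (RMSIP), has a compact standard definition: it averages the squared inner products between the first k normal modes of two structures and takes the square root. The snippet below is a generic NumPy illustration of that formula; it is not WEBnm@ code, and the random orthonormal vectors merely stand in for real mode sets.

      import numpy as np

      def rmsip(modes_a, modes_b, k=10):
          """Root Mean Squared Inner Product between two sets of normal modes.

          modes_a, modes_b: arrays of shape (n_modes, 3N) with orthonormal rows.
          """
          a, b = modes_a[:k], modes_b[:k]
          overlaps = a @ b.T                      # k x k matrix of inner products
          return np.sqrt(np.sum(overlaps ** 2) / k)

      # Toy example with random orthonormal mode sets (stand-ins for real modes).
      rng = np.random.default_rng(0)
      qa, _ = np.linalg.qr(rng.normal(size=(30, 10)))
      qb, _ = np.linalg.qr(rng.normal(size=(30, 10)))
      print(rmsip(qa.T, qb.T))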

  20. SequenceCEROSENE: a computational method and web server to visualize spatial residue neighborhoods at the sequence level.

    Science.gov (United States)

    Heinke, Florian; Bittrich, Sebastian; Kaiser, Florian; Labudde, Dirk

    2016-01-01

    To understand the molecular function of biopolymers, studying their structural characteristics is of central importance. Graphics programs are often utilized to inspect these properties, but with the increasing number of structures available in databases, and of structure models produced by automated modeling frameworks, this process requires assistance from tools that allow automated structure visualization. In this paper a web server and its underlying method for generating graphical sequence representations of molecular structures is presented. The method, called SequenceCEROSENE (color encoding of residues obtained by spatial neighborhood embedding), retrieves the sequence of each amino acid or nucleotide chain in a given structure and produces a color coding for each residue based on three-dimensional structure information. From this, color-highlighted sequences are obtained, where residue coloring represents the three-dimensional location of residues in the structure. This color encoding thus provides a one-dimensional representation from which spatial interactions, proximity and relations between residues or entire chains can be deduced quickly and solely from color similarity. Furthermore, additional heteroatoms and chemical compounds bound to the structure, such as ligands or coenzymes, are processed and reported as well. To provide free access to SequenceCEROSENE, a web server has been implemented that allows generating color codings for structures deposited in the Protein Data Bank or for structure models uploaded by the user. Besides retrieving visualizations in popular graphic formats, the underlying raw data can be downloaded as well. In addition, the server provides user interactivity with the generated visualizations and the three-dimensional structure in question. Color-encoded sequences generated by SequenceCEROSENE can help researchers quickly perceive the general characteristics of a structure of interest (or entire sets of complexes), thus supporting the initial stages of structural analysis.

  1. Developing a Web Server Platform with SAPI support for AJAX RPC using JSON

    OpenAIRE

    Iulian ILIE NEMEDI

    2007-01-01

    Writing a custom web server with SAPI support is a useful exercise that helps students and future system architects understand the links between network programming, object-oriented programming, enterprise application design patterns and development best practices, because it offers a view of interprocess communication and application extensibility in a distributed environment.
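
    As a concrete illustration of the network-programming layer such an exercise starts from, the sketch below implements the smallest possible HTTP request/response cycle over a raw TCP socket in Python. It is a teaching stand-in, not the platform described in the record, and it omits SAPI-style module loading entirely.

      import socket

      def serve_once(host="127.0.0.1", port=8080):
          """Accept a single HTTP request and answer it with a fixed response."""
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
              srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
              srv.bind((host, port))
              srv.listen(1)
              conn, _addr = srv.accept()
              with conn:
                  request = conn.recv(4096).decode("iso-8859-1")
                  request_line = request.splitlines()[0] if request else ""
                  body = f"You asked for: {request_line}\n".encode()
                  response = (b"HTTP/1.1 200 OK\r\n"
                              b"Content-Type: text/plain\r\n"
                              + b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                              + b"Connection: close\r\n\r\n"
                              + body)
                  conn.sendall(response)

      if __name__ == "__main__":
          serve_once()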

  2. Web Server Security on Open Source Environments

    Science.gov (United States)

    Gkoutzelis, Dimitrios X.; Sardis, Manolis S.

    Administering critical resources has never been more difficult than it is today. In a changing world of software innovation where major changes occur on a daily basis, it is crucial for webmasters and server administrators to shield their data against an unknown arsenal of attacks in the hands of their attackers. Until now this kind of defense was a privilege of the few; under-budgeted and low-cost solutions left the defender vulnerable to the rise of innovative attacking methods. Luckily, the digital revolution of the past decade left its mark, changing the way we face security forever: open source infrastructure today covers all the prerequisites for a secure web environment in a way we could never imagine fifteen years ago. Online security of large corporations, military and government bodies is more and more handled by open source applications, thus driving the technological trend of the 21st century in adopting open solutions to e-commerce and privacy issues. This paper describes substantial security precautions for facing privacy and authentication issues in a totally open source web environment. Our goal is to state and face the best-known problems in data handling and consequently propose the most appealing techniques to face these challenges through an open solution.

  3. SeMPI: a genome-based secondary metabolite prediction and identification web server.

    Science.gov (United States)

    Zierep, Paul F; Padilla, Natàlia; Yonchev, Dimitar G; Telukunta, Kiran K; Klementz, Dennis; Günther, Stefan

    2017-07-03

    The secondary metabolism of bacteria, fungi and plants yields a vast number of bioactive substances. The constantly increasing amount of published genomic data provides the opportunity for efficient identification of gene clusters by genome mining. Conversely, for many natural products with resolved structures, the encoding gene clusters have not yet been identified. Even though genome mining tools have become significantly more efficient in the identification of biosynthetic gene clusters, structural elucidation of the actual secondary metabolite is still challenging, especially due to as yet unpredictable post-modifications. Here, we introduce SeMPI, a web server providing a prediction and identification pipeline for natural products synthesized by modular type I polyketide synthases. In order to limit the possible structures of PKS products and to include putative tailoring reactions, a structural comparison with annotated natural products was introduced. Furthermore, a benchmark was designed based on 40 gene clusters with annotated PKS products. The web server of the pipeline (SeMPI) is freely available at: http://www.pharmaceutical-bioinformatics.de/sempi. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR

    Energy Technology Data Exchange (ETDEWEB)

    Schot, Gijs van der [Uppsala University, Laboratory of Molecular Biophysics, Department of Cell and Molecular Biology (Sweden); Bonvin, Alexandre M. J. J., E-mail: a.m.j.j.bonvin@uu.nl [Utrecht University, Faculty of Science – Chemistry, Bijvoet Center for Biomolecular Research (Netherlands)

    2015-08-15

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on unassigned NOE lists (Huang et al. in J Am Chem Soc 127:1665-1674, 2005b, doi: 10.1021/ja047109h). We compare the original submissions using a previous version of the server based on Rosetta version 2.6 with recalculated targets using the new R3FP fragment picker for fragment selection and implementing a new annotation of prediction reliability (van der Schot et al. in J Biomol NMR 57:27-35, 2013, doi: 10.1007/s10858-013-9762-6), both implemented in the CS-Rosetta3 WeNMR server. In this second round of CASD-NMR, the WeNMR CS-Rosetta server has demonstrated a much better performance than in the first round since only converged targets were submitted. Further, recalculation of all CASD-NMR targets using the new version of the server demonstrates that our new annotation of prediction quality is giving reliable results. Predictions annotated as weak are often found to provide useful models, but only for a fraction of the sequence, and should therefore only be used with caution.

  5. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR

    International Nuclear Information System (INIS)

    Schot, Gijs van der; Bonvin, Alexandre M. J. J.

    2015-01-01

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on unassigned NOE lists (Huang et al. in J Am Chem Soc 127:1665-1674, 2005b, doi: 10.1021/ja047109h). We compare the original submissions using a previous version of the server based on Rosetta version 2.6 with recalculated targets using the new R3FP fragment picker for fragment selection and implementing a new annotation of prediction reliability (van der Schot et al. in J Biomol NMR 57:27-35, 2013, doi: 10.1007/s10858-013-9762-6), both implemented in the CS-Rosetta3 WeNMR server. In this second round of CASD-NMR, the WeNMR CS-Rosetta server has demonstrated a much better performance than in the first round since only converged targets were submitted. Further, recalculation of all CASD-NMR targets using the new version of the server demonstrates that our new annotation of prediction quality is giving reliable results. Predictions annotated as weak are often found to provide useful models, but only for a fraction of the sequence, and should therefore only be used with caution.

  6. COGNAT: a web server for comparative analysis of genomic neighborhoods.

    Science.gov (United States)

    Klimchuk, Olesya I; Konovalov, Kirill A; Perekhvatov, Vadim V; Skulachev, Konstantin V; Dibrova, Daria V; Mulkidjanian, Armen Y

    2017-11-22

    In prokaryotic genomes, functionally coupled genes can be organized in conserved gene clusters enabling their coordinated regulation. Such clusters could contain one or several operons, which are groups of co-transcribed genes. Those genes that evolved from a common ancestral gene by speciation (i.e. orthologs) are expected to have similar genomic neighborhoods in different organisms, whereas those copies of the gene that are responsible for dissimilar functions (i.e. paralogs) could be found in dissimilar genomic contexts. Comparative analysis of genomic neighborhoods facilitates the prediction of co-regulated genes and helps to discern different functions in large protein families. We intended, building on the attribution of gene sequences to the clusters of orthologous groups of proteins (COGs), to provide a method for visualization and comparative analysis of genomic neighborhoods of evolutionary related genes, as well as a respective web server. Here we introduce the COmparative Gene Neighborhoods Analysis Tool (COGNAT), a web server for comparative analysis of genomic neighborhoods. The tool is based on the COG database, as well as the Pfam protein families database. As an example, we show the utility of COGNAT in identifying a new type of membrane protein complex that is formed by paralog(s) of one of the membrane subunits of the NADH:quinone oxidoreductase of type 1 (COG1009) and a cytoplasmic protein of unknown function (COG3002). This article was reviewed by Drs. Igor Zhulin, Uri Gophna and Igor Rogozin.

  7. Developing a Web Server Platform with SAPI support for AJAX RPC using JSON

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Writing a custom web server with SAPI support is a useful exercise that helps students and future system architects understand the links between network programming, object-oriented programming, enterprise application design patterns and development best practices, because it offers a view of interprocess communication and application extensibility in a distributed environment.

  8. GenProBiS: web server for mapping of sequence variants to protein binding sites.

    Science.gov (United States)

    Konc, Janez; Skrlj, Blaz; Erzen, Nika; Kunej, Tanja; Janezic, Dusanka

    2017-07-03

    Discovery of potentially deleterious sequence variants is important and has wide implications for research and generation of new hypotheses in human and veterinary medicine, and drug discovery. The GenProBiS web server maps sequence variants to protein structures from the Protein Data Bank (PDB), and further to protein-protein, protein-nucleic acid, protein-compound, and protein-metal ion binding sites. The concept of a protein-compound binding site is understood in the broadest sense, which includes glycosylation and other post-translational modification sites. Binding sites were defined by local structural comparisons of whole protein structures using the Protein Binding Sites (ProBiS) algorithm and transposition of ligands from the similar binding sites found to the query protein using the ProBiS-ligands approach with new improvements introduced in GenProBiS. Binding site surfaces were generated as three-dimensional grids encompassing the space occupied by predicted ligands. The server allows intuitive visual exploration of comprehensively mapped variants, such as human somatic mis-sense mutations related to cancer and non-synonymous single nucleotide polymorphisms from 21 species, within the predicted binding sites regions for about 80 000 PDB protein structures using fast WebGL graphics. The GenProBiS web server is open and free to all users at http://genprobis.insilab.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. deepTools2: a next generation web server for deep-sequencing data analysis.

    Science.gov (United States)

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-08

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continues to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command-line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Analysis of Web Server Log Files: Website of Information Management Department of Hacettepe University

    Directory of Open Access Journals (Sweden)

    Mandana Mir Moftakhari

    2015-09-01

    Over the last decade, the importance of analysing information management system logs has grown, because analysing log data has proved helpful in improving information system design and the interface and architecture of websites. Log file analysis is one of the best ways to understand the information-searching processes of online searchers and users' needs, interests, knowledge, and prejudices. The utilization of data collected in the transaction logs of web search engines helps designers, researchers and web site managers to understand the complex interactions of users' goals and behaviours and to increase the efficiency and effectiveness of websites. Before starting any analysis, it should be verified that the log files of the web site contain enough information; otherwise the analyst will not be able to create a complete report. In this study we evaluate the website of the Information Management Department of Hacettepe University by analysing the server log files. Results show that the log files provided by the web site server do not contain an adequate amount of information. The reports we created provide some information about users' behaviour and needs, but they are not sufficient for making ideal decisions about the content and hyperlink structure of the website. The study also shows that creating an extended log file is essential for the website. Finally, we believe these results can help to improve and redesign the website.
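
    Server log analysis of the kind described above usually starts by parsing each line of the access log into structured fields. The snippet below parses the widely used Combined Log Format with a regular expression and tallies requested paths; the sample line is invented for illustration and the field names are generic, not specific to the Hacettepe server.

      import re
      from collections import Counter

      # Combined Log Format: host ident user [time] "request" status bytes "referer" "user-agent"
      LOG_RE = re.compile(
          r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
          r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) (?P<size>\S+) '
          r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"')

      sample = ('10.0.0.1 - - [01/Sep/2015:10:12:01 +0300] "GET /index.html HTTP/1.1" '
                '200 5124 "-" "Mozilla/5.0"')

      hits = Counter()
      m = LOG_RE.match(sample)
      if m:
          hits[m.group("path")] += 1
      print(hits.most_common())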

  11. EnviroAtlas - Housing in the Conterminous U.S. Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service includes maps that illustrate the number and density of housing units. Housing density and the proximity of housing to employment can...

  12. EnviroAtlas - Accessibility Characteristics in the Conterminous U.S. Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service includes maps that illustrate factors affecting transit accessibility, and indicators of accessibility. Accessibility measures how...

  13. The RING 2.0 web server for high quality residue interaction networks.

    Science.gov (United States)

    Piovesan, Damiano; Minervini, Giovanni; Tosatto, Silvio C E

    2016-07-08

    Residue interaction networks (RINs) are an alternative way of representing protein structures in which nodes are residues and arcs are physico-chemical interactions. RINs have been extensively and successfully used for analysing mutation effects, protein folding, domain-domain communication and catalytic activity. Here we present RING 2.0, a new version of the RING software for the identification of covalent and non-covalent bonds in protein structures, including π-π stacking and π-cation interactions. RING 2.0 is extremely fast and generates both intra- and inter-chain interactions including solvent and ligand atoms. The generated networks are very accurate and reliable thanks to a complex empirical re-parameterization of distance thresholds performed on the entire Protein Data Bank. By default, RING output is generated with optimal parameters, but the web server provides an exhaustive interface to customize the calculation. The network can be visualized directly in the browser or in Cytoscape. Alternatively, the RING-Viz script for Pymol allows visualizing the interactions at the atomic level in the structure. The web server and RING-Viz, together with an extensive help and tutorial, are available at: http://protein.bio.unipd.it/ring. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
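
    At its core, building a residue interaction network means turning geometric proximity (plus interaction-type specific rules) into graph edges. The sketch below shows only the simplest geometric step, a distance-threshold contact test between residue coordinates; the 5 Å cutoff and the toy coordinates are illustrative and do not reproduce RING's empirically re-parameterized, interaction-specific thresholds.

      from itertools import combinations
      from math import dist

      # Toy residue "centroids": residue id -> (x, y, z), purely illustrative.
      residues = {
          "A:LYS12": (0.0, 0.0, 0.0),
          "A:GLU45": (3.2, 1.1, 0.4),
          "A:PHE78": (9.5, 2.0, 1.0),
      }

      CUTOFF = 5.0  # Angstroms, a single illustrative threshold

      edges = [(r1, r2) for (r1, c1), (r2, c2) in combinations(residues.items(), 2)
               if dist(c1, c2) <= CUTOFF]
      print(edges)   # [('A:LYS12', 'A:GLU45')]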

  14. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, collecting close to 1 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the first...

  15. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, and has so far collected over 5 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the...

  16. ORION: a web server for protein fold recognition and structure prediction using evolutionary hybrid profiles.

    Science.gov (United States)

    Ghouzam, Yassine; Postic, Guillaume; Guerin, Pierre-Edouard; de Brevern, Alexandre G; Gelly, Jean-Christophe

    2016-06-20

    Protein structure prediction based on comparative modeling is the most efficient way to produce structural models when it can be performed. ORION is a dedicated webserver based on a new strategy that performs this task. The identification by ORION of suitable templates is performed using an original profile-profile approach that combines sequence and structure evolution information. Structure evolution information is encoded into profiles using structural features, such as solvent accessibility and local conformation -with Protein Blocks-, which give an accurate description of the local protein structure. ORION has recently been improved, increasing by 5% the quality of its results. The ORION web server accepts a single protein sequence as input and searches homologous protein structures within minutes. Various databases such as PDB, SCOP and HOMSTRAD can be mined to find an appropriate structural template. For the modeling step, a protein 3D structure can be directly obtained from the selected template by MODELLER and displayed with global and local quality model estimation measures. The sequence and the predicted structure of 4 examples from the CAMEO server and a recent CASP11 target from the 'Hard' category (T0818-D1) are shown as pertinent examples. Our web server is accessible at http://www.dsimb.inserm.fr/ORION/.

  17. PROTOTIPE PEMESANAN BAHAN PUSTAKA MELALUI WEB MENGGUNAKAN ACTIVE SERVER PAGE (ASP

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2002-01-01

    Electronic commerce is one of the components of the internet that is growing fast worldwide. In this research, a prototype was developed for a library service that offers ordering of library collections, especially books and articles, through the World Wide Web. In order to enable interaction between seller and buyer, a dynamic web application is needed, which requires supporting technology and software. One such technology is Active Server Pages (ASP), which is combined with a database system to store data. The other component, serving as an interface between the application and the database, is ActiveX Data Objects (ADO). ASP has advantages in its scripting method and in the ease of configuring it with a database. This application consists of two major parts: administrator and user. The prototype has facilities for editing, searching and browsing ordering information online. Users can also download articles they have searched for and ordered. The payment method in this e-commerce system is essential because in Indonesia not everybody has a credit card. As a solution, the prototype includes a form for users who do not have a credit card; once the bill has been paid, the transaction can be completed online. In this case, one of the advantages of ASP, the "session", is used: data being processed is not lost as long as the user remains within that session. Sessions are used in both the user area and the admin area, where users and the administrator can carry out various operations. Abstract in Bahasa Indonesia: Electronic commerce is a part of the internet that is growing rapidly in the world today. In this research, a prototype application was built to develop library services, in particular the ordering of articles and books through the World Wide Web. Building a web-based application requires technology and software that support the creation of dynamic web sites, so that there is interaction between buyer and seller.

  18. New Web Server - the Java Version of Tempest - Produced

    Science.gov (United States)

    York, David W.; Ponyik, Joseph G.

    2000-01-01

    A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.

  19. The digital anatomist information system and its use in the generation and delivery of Web-based anatomy atlases.

    Science.gov (United States)

    Brinkley, J F; Bradley, S W; Sundsten, J W; Rosse, C

    1997-12-01

    Advances in network and imaging technology, coupled with the availability of 3-D datasets such as the Visible Human, provide a unique opportunity for developing information systems in anatomy that can deliver relevant knowledge directly to the clinician, researcher or educator. A software framework is described for developing such a system within a distributed architecture that includes spatial and symbolic anatomy information resources, Web and custom servers, and authoring and end-user client programs. The authoring tools have been used to create 3-D atlases of the brain, knee and thorax that are used both locally and throughout the world. For the one and a half year period from June 1995-January 1997, the on-line atlases were accessed by over 33,000 sites from 94 countries, with an average of over 4000 "hits" per day, and 25,000 hits per day during peak exam periods. The atlases have been linked to by over 500 sites, and have received at least six unsolicited awards by outside rating institutions. The flexibility of the software framework has allowed the information system to evolve with advances in technology and representation methods. Possible new features include knowledge-based image retrieval and tutoring, dynamic generation of 3-D scenes, and eventually, real-time virtual reality navigation through the body. Such features, when coupled with other on-line biomedical information resources, should lead to interesting new ways for managing and accessing structural information in medicine. Copyright 1997 Academic Press.

  20. GlobAl Distribution of GEnetic Traits (GADGET) web server: polygenic trait scores worldwide.

    Science.gov (United States)

    Chande, Aroon T; Wang, Lu; Rishishwar, Lavanya; Conley, Andrew B; Norris, Emily T; Valderrama-Aguirre, Augusto; Jordan, I King

    2018-05-18

    Human populations from around the world show striking phenotypic variation across a wide variety of traits. Genome-wide association studies (GWAS) are used to uncover genetic variants that influence the expression of heritable human traits; accordingly, population-specific distributions of GWAS-implicated variants may shed light on the genetic basis of human phenotypic diversity. With this in mind, we developed the GlobAl Distribution of GEnetic Traits web server (GADGET http://gadget.biosci.gatech.edu). The GADGET web server provides users with a dynamic visual platform for exploring the relationship between worldwide genetic diversity and the genetic architecture underlying numerous human phenotypes. GADGET integrates trait-implicated single nucleotide polymorphisms (SNPs) from GWAS, with population genetic data from the 1000 Genomes Project, to calculate genome-wide polygenic trait scores (PTS) for 818 phenotypes in 2504 individual genomes. Population-specific distributions of PTS are shown for 26 human populations across 5 continental population groups, with traits ordered based on the extent of variation observed among populations. Users of GADGET can also upload custom trait SNP sets to visualize global PTS distributions for their own traits of interest.
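
    The genome-wide polygenic trait scores (PTS) reported above are, at their simplest, a weighted sum over trait-associated SNPs of the number of effect alleles an individual carries times the effect size reported by GWAS. The snippet below illustrates that additive model with made-up SNP identifiers, dosages and effect sizes; it is a generic illustration, not the server's actual scoring code.

      # Hypothetical GWAS effect sizes (beta per effect allele) for one trait.
      effect_sizes = {"rs0000001": 0.12, "rs0000002": -0.05, "rs0000003": 0.30}

      # Hypothetical effect-allele dosages (0, 1 or 2 copies) for one genome.
      dosages = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}

      def polygenic_score(dosages, effect_sizes):
          """Additive polygenic trait score: sum of dosage * effect size."""
          return sum(dosages.get(snp, 0) * beta for snp, beta in effect_sizes.items())

      print(polygenic_score(dosages, effect_sizes))  # 2*0.12 + 1*(-0.05) + 0*0.30 = 0.19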

  1. EnviroAtlas - Employment Activity in the Conterminous U.S. Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service includes maps that illustrate job activity in each census block group. Employment diversity, employment density, and proximity of...

  2. A FPGA Embedded Web Server for Remote Monitoring and Control of Smart Sensors Networks

    Science.gov (United States)

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2014-01-01

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose and configurable RISC processor which is embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server of dynamic pages using the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and the other to control the network. This protocol is widely used for connecting smart sensors, actuators and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive, domotic, etc., although it can easily be replaced by any other protocol because of the inherent characteristics of the FPGA-based technology. PMID:24379047
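
    The Boa web server mentioned above serves dynamic pages through plain CGI: the server runs a script, and whatever the script prints (an HTTP header block, a blank line, then the body) is returned to the browser. The sketch below is a minimal Python CGI script of that shape; the read_temperature() stub is hypothetical, standing in for a value that the real system would acquire over the TTP/A smart-sensor network.

      #!/usr/bin/env python3
      # Minimal CGI script: print an HTTP header block, a blank line, then the body.

      def read_temperature():
          """Stand-in for a value acquired from the TTP/A smart-sensor network."""
          return 23.7  # hypothetical reading in degrees Celsius

      def main():
          value = read_temperature()
          print("Content-Type: text/html")
          print()  # blank line separates headers from the body
          print(f"<html><body><h1>Node temperature: {value:.1f} &deg;C</h1></body></html>")

      if __name__ == "__main__":
          main()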

  3. A FPGA embedded web server for remote monitoring and control of smart sensors networks.

    Science.gov (United States)

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2013-12-27

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose and configurable RISC processor which is embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server of dynamic pages using the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and the other to control the network. This protocol is widely used for connecting smart sensors, actuators and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive, domotic, etc., although it can easily be replaced by any other protocol because of the inherent characteristics of the FPGA-based technology.

  4. A FPGA Embedded Web Server for Remote Monitoring and Control of Smart Sensors Networks

    Directory of Open Access Journals (Sweden)

    Eduardo Magdaleno

    2013-12-01

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose and configurable RISC processor which is embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server of dynamic pages using the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and the other to control the network. This protocol is widely used for connecting smart sensors, actuators and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive, domotic, etc., although it can easily be replaced by any other protocol because of the inherent characteristics of the FPGA-based technology.

  5. Chemotext: A Publicly Available Web Server for Mining Drug-Target-Disease Relationships in PubMed.

    Science.gov (United States)

    Capuzzi, Stephen J; Thornton, Thomas E; Liu, Kammy; Baker, Nancy; Lam, Wai In; O'Banion, Colin P; Muratov, Eugene N; Pozefsky, Diane; Tropsha, Alexander

    2018-02-26

    Elucidation of the mechanistic relationships between drugs, their targets, and diseases is at the core of modern drug discovery research. Thousands of studies relevant to the drug-target-disease (DTD) triangle have been published and annotated in the Medline/PubMed database. Mining this database affords rapid identification of all published studies that confirm connections between vertices of this triangle or enable new inferences of such connections. To this end, we describe the development of Chemotext, a publicly available Web server that mines the entire compendium of published literature in PubMed annotated by Medical Subject Headings (MeSH) terms. The goal of Chemotext is to identify all known DTD relationships and infer missing links between vertices of the DTD triangle. As a proof of concept, we show that Chemotext could be instrumental in generating new drug repurposing hypotheses or annotating clinical outcomes pathways for known drugs. The Chemotext Web server is freely available at http://chemotext.mml.unc.edu.

  6. Benchmark of Client and Server-Side Catchment Delineation Approaches on Web-Based Systems

    Science.gov (United States)

    Demir, I.; Sermet, M. Y.; Sit, M. A.

    2016-12-01

    Recent advances in internet and cyberinfrastructure technologies have provided the capability to acquire large-scale spatial data from various gauges and sensor networks. The growing collection of environmental data has increased demand for applications capable of managing and processing large-scale, high-resolution data sets. Given the amount and resolution of the data sets provided, one of the challenging tasks in organizing and customizing hydrological data sets is the delineation of watersheds on demand. Watershed delineation is the process of creating a boundary that represents the contributing area for a specific control point or water outlet, with the intent of characterizing and analysing portions of a study area. Although many GIS tools and software packages for watershed analysis are available on desktop systems, there is a need for web-based and client-side techniques that create a dynamic and interactive environment for exploring hydrological data. In this project, we demonstrated several watershed delineation techniques on the web, implemented on the client side using JavaScript and WebGL, and on the server side using Python and C++. We also developed a client-side GPGPU (General Purpose Graphics Processing Unit) algorithm to analyze high-resolution terrain data for watershed delineation, which allows parallelization on the GPU. Web-based real-time analysis of watershed segmentation can be helpful for decision-makers and interested stakeholders while eliminating the need to install complex software packages and handle large-scale data sets. Utilization of client-side hardware resources also reduces the need for servers due to the crowdsourcing nature of the approach. Our goal for future work is to improve other hydrologic analysis methods, such as rain flow tracking, by adapting the presented approaches.
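
    One common raster approach to the delineation step described above is to start from the outlet cell and walk the D8 flow-direction grid upstream, collecting every cell that eventually drains to the outlet. The sketch below does exactly that on a tiny hand-made grid; the grid values and the eight-direction encoding are illustrative, and neither the client-side WebGL nor the server-side C++ implementations from the abstract are reproduced here.

      from collections import deque

      # D8 flow directions encoded as (drow, dcol) offsets pointing downstream.
      D8 = {"E": (0, 1), "SE": (1, 1), "S": (1, 0), "SW": (1, -1),
            "W": (0, -1), "NW": (-1, -1), "N": (-1, 0), "NE": (-1, 1)}

      # Tiny illustrative flow-direction grid (3 x 3); every cell drains toward (2, 2).
      flow_dir = [["SE", "S",  "S"],
                  ["E",  "SE", "S"],
                  ["E",  "E",  None]]   # (2, 2) is the outlet, no downstream neighbor

      def delineate(flow_dir, outlet):
          """Collect all cells whose D8 flow path reaches the outlet (upstream walk)."""
          rows, cols = len(flow_dir), len(flow_dir[0])
          catchment, queue = {outlet}, deque([outlet])
          while queue:
              r, c = queue.popleft()
              for dr in range(-1, 2):
                  for dc in range(-1, 2):
                      nr, nc = r + dr, c + dc
                      if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                          continue
                      d = flow_dir[nr][nc]
                      # A neighbor is upstream if its flow direction points back at (r, c).
                      if d and (nr + D8[d][0], nc + D8[d][1]) == (r, c) and (nr, nc) not in catchment:
                          catchment.add((nr, nc))
                          queue.append((nr, nc))
          return catchment

      print(sorted(delineate(flow_dir, (2, 2))))   # all 9 cells drain to the outlet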

  7. The TOPCONS web server for consensus prediction of membrane protein topology and signal peptides.

    Science.gov (United States)

    Tsirigos, Konstantinos D; Peters, Christoph; Shu, Nanjiang; Käll, Lukas; Elofsson, Arne

    2015-07-01

    TOPCONS (http://topcons.net/) is a widely used web server for consensus prediction of membrane protein topology. We hereby present a major update to the server, with some substantial improvements, including the following: (i) TOPCONS can now efficiently separate signal peptides from transmembrane regions. (ii) The server can now differentiate more successfully between globular and membrane proteins. (iii) The server now is even slightly faster, although a much larger database is used to generate the multiple sequence alignments. For most proteins, the final prediction is produced in a matter of seconds. (iv) The user-friendly interface is retained, with the additional feature of submitting batch files and accessing the server programmatically using standard interfaces, making it thus ideal for proteome-wide analyses. Indicatively, the user can now scan the entire human proteome in a few days. (v) For proteins with homology to a known 3D structure, the homology-inferred topology is also displayed. (vi) Finally, the combination of methods currently implemented achieves an overall increase in performance by 4% as compared to the currently available best-scoring methods and TOPCONS is the only method that can identify signal peptides and still maintain a state-of-the-art performance in topology predictions. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks.

    Directory of Open Access Journals (Sweden)

    Asa Thibodeau

    2016-06-01

    Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network-based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. QuIN's web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database, and the source code is available under the GPLV3 license on GitHub: https://github.com/UcarLab/QuIN/.

  9. QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks.

    Science.gov (United States)

    Thibodeau, Asa; Márquez, Eladio J; Luo, Oscar; Ruan, Yijun; Menghi, Francesca; Shin, Dong-Guk; Stitzel, Michael L; Vera-Licona, Paola; Ucar, Duygu

    2016-06-01

    Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network-based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. QuIN's web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database, and the source code is available under the GPLV3 license on GitHub: https://github.com/UcarLab/QuIN/.
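
    Representing a chromatin interaction map as a network, as described above, reduces in its simplest form to treating anchor regions as nodes and interactions as edges, after which standard graph measures such as node degree can be used to prioritize highly connected regulatory anchors. The snippet below sketches that idea with plain dictionaries and invented anchor coordinates; it is a generic illustration, not QuIN's Java implementation.

      from collections import defaultdict

      # Hypothetical interactions: pairs of anchor regions (chrom, start, end).
      interactions = [
          (("chr1", 1000, 2000), ("chr1", 50000, 51000)),
          (("chr1", 1000, 2000), ("chr1", 90000, 91000)),
          (("chr2", 500, 1500),  ("chr2", 30000, 31000)),
      ]

      # Build an undirected adjacency list: anchors are nodes, interactions are edges.
      network = defaultdict(set)
      for a, b in interactions:
          network[a].add(b)
          network[b].add(a)

      # Rank anchors by degree to highlight highly connected regulatory regions.
      by_degree = sorted(network, key=lambda n: len(network[n]), reverse=True)
      for node in by_degree:
          print(node, "degree:", len(network[node]))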

  10. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan.

    Science.gov (United States)

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    The rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have little experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH environment, and flexible docking modes, are implemented. Users can download the 200 TCM compounds with the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors of interest to the user. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  11. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan

    Science.gov (United States)

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    The rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have little experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH environment, and flexible docking modes, are implemented. Users can download the 200 TCM compounds with the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors of interest to the user. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  12. The UMLS Knowledge Source Server: an experience in Web 2.0 technologies.

    Science.gov (United States)

    Thorn, Karen E; Bangalore, Anantha K; Browne, Allen C

    2007-10-11

    The UMLS Knowledge Source Server (UMLSKS), developed at the National Library of Medicine (NLM), makes the knowledge sources of the Unified Medical Language System (UMLS) available to the research community over the Internet. In 2003, the UMLSKS was redesigned utilizing state-of-the-art technologies available at that time. That design offered a significant improvement over the prior version but presented a set of technology-dependent issues that limited its functionality and usability. Four areas of desired improvement were identified: software interfaces, web interface content, system maintenance/deployment, and user authentication. By employing next-generation web technologies, newer authentication paradigms and further refinements in modular design methods, these areas could be addressed and corrected to meet the ever-increasing needs of UMLSKS developers. In this paper we detail the issues present with the existing system and describe the new system's design using new technologies considered entrants in the Web 2.0 development era.

  13. PseKRAAC: a flexible web server for generating pseudo K-tuple reduced amino acids composition.

    Science.gov (United States)

    Zuo, Yongchun; Li, Yuan; Chen, Yingli; Li, Guangpeng; Yan, Zhenhe; Yang, Lei

    2017-01-01

    Reduced amino acid alphabets offer a powerful way to both simplify protein complexity and identify functionally conserved regions. However, different protein problems may call for different clustering methods. Encouraged by the success of the pseudo-amino acid composition algorithm, we developed a freely available web server, called PseKRAAC (pseudo K-tuple reduced amino acid composition). By implementing reduced amino acid alphabets, protein complexity can be significantly simplified, which decreases the chance of overfitting, lowers the computational burden and reduces information redundancy. PseKRAAC delivers more capability for protein research by incorporating three crucial parameters that describe protein composition. Users can easily generate many different modes of PseKRAAC tailored to their needs by selecting various reduced amino acid alphabets and other characteristic parameters. It is anticipated that the PseKRAAC web server will become a very useful tool in computational proteomics and protein sequence analysis. Freely available on the web at http://bigdata.imu.edu.cn/psekraac. Contacts: yczuo@imu.edu.cn or imu.hema@foxmail.com or yanglei_hmu@163.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
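
    To make the idea of a K-tuple composition on a reduced amino acid alphabet concrete, the sketch below maps a protein sequence onto a hypothetical five-group alphabet and counts normalized K-tuple frequencies. The grouping used here is an illustrative assumption, not one of PseKRAAC's actual alphabets or parameter sets.

```python
# Minimal sketch: K-tuple composition over a reduced amino acid alphabet.
# The 5-group clustering below is a made-up example, not a PseKRAAC alphabet.
from collections import Counter
from itertools import product

REDUCED = {  # amino acid -> cluster label (hypothetical grouping)
    "A": "a", "G": "a", "V": "a",
    "I": "b", "L": "b", "F": "b", "P": "b",
    "Y": "c", "M": "c", "T": "c", "S": "c",
    "H": "d", "N": "d", "Q": "d", "W": "d",
    "R": "e", "K": "e", "D": "e", "E": "e", "C": "e",
}

def ktuple_composition(sequence, k=2):
    """Return normalized K-tuple frequencies on the reduced alphabet."""
    reduced = "".join(REDUCED.get(aa, "a") for aa in sequence.upper())
    counts = Counter(reduced[i:i + k] for i in range(len(reduced) - k + 1))
    total = sum(counts.values())
    alphabet = sorted(set(REDUCED.values()))
    return {"".join(t): counts["".join(t)] / total for t in product(alphabet, repeat=k)}

print(ktuple_composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", k=2))
```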

  14. CID-miRNA: A web server for prediction of novel miRNA precursors in human genome

    International Nuclear Information System (INIS)

    Tyagi, Sonika; Vaz, Candida; Gupta, Vipin; Bhatia, Rohit; Maheshwari, Sachin; Srinivasan, Ashwin; Bhattacharya, Alok

    2008-01-01

    microRNAs (miRNAs) are a class of non-protein-coding functional RNAs that are thought to regulate expression of target genes by direct interaction with mRNAs. miRNAs have been identified through both experimental and computational methods in a variety of eukaryotic organisms. Though these approaches have been partially successful, there is a need to develop more tools for detection of these RNAs, as they are also thought to be present in abundance in many genomes. In this report we describe a tool and a web server, named CID-miRNA, for identification of miRNA precursors in a given DNA sequence, utilising secondary structure-based filtering systems and an algorithm based on a stochastic context-free grammar trained on human miRNAs. CID-miRNA analyses a given sequence through a web interface for the presence of putative miRNA precursors, and the generated output lists all the potential regions that can form miRNA-like structures. It can also scan large genomic sequences for the presence of potential miRNA precursors in its stand-alone form. The web server can be accessed at (http://mirna.jnu.ac.in/cidmirna/)

  15. RNAmutants: a web server to explore the mutational landscape of RNA secondary structures

    Science.gov (United States)

    Waldispühl, Jerome; Devadas, Srinivas; Berger, Bonnie; Clote, Peter

    2009-01-01

    The history and mechanism of molecular evolution in DNA have been greatly elucidated by contributions from genetics, probability theory and bioinformatics—indeed, mathematical developments such as Kimura's neutral theory, Kingman's coalescent theory and efficient software such as BLAST, ClustalW, Phylip, etc., provide the foundation for modern population genetics. In contrast to DNA, the function of most noncoding RNA depends on tertiary structure, experimentally known to be largely determined by secondary structure, for which dynamic programming can efficiently compute the minimum free energy secondary structure. For this reason, understanding the effect of pointwise mutations in RNA secondary structure could reveal fundamental properties of structural RNA molecules and improve our understanding of molecular evolution of RNA. The web server RNAmutants provides several efficient tools to compute the ensemble of low-energy secondary structures for all k-mutants of a given RNA sequence, where k is bounded by a user-specified upper bound. As we have previously shown, these tools can be used to predict putative deleterious mutations and to analyze regulatory sequences from the hepatitis C virus and human immunodeficiency virus genomes. The web server is available at http://bioinformatics.bc.edu/clotelab/RNAmutants/, and downloadable binaries at http://rnamutants.csail.mit.edu/. PMID:19531740

  16. incaRNAfbinv: a web server for the fragment-based design of RNA sequences

    Science.gov (United States)

    Drory Retwitzer, Matan; Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme; Barash, Danny

    2016-01-01

    In recent years, new methods for computational RNA design have been developed and applied to various problems in synthetic biology and nanotechnology. Lately, there is considerable interest in incorporating essential biological information when solving the inverse RNA folding problem. Correspondingly, RNAfbinv aims at including biologically meaningful constraints and is the only program to date that performs a fragment-based design of RNA sequences. In doing so it allows the design of sequences that do not necessarily exactly fold into the target, as long as the overall coarse-grained tree graph shape is preserved. Augmented by the weighted sampling algorithm of incaRNAtion, our web server called incaRNAfbinv implements the method devised in RNAfbinv and offers an interactive environment for the inverse folding of RNA using a fragment-based design approach. It takes as input: a target RNA secondary structure; optional sequence and motif constraints; optional target minimum free energy, neutrality and GC content. In addition to the design of synthetic regulatory sequences, it can be used as a pre-processing step for the detection of novel naturally occurring RNAs. The two complementary methodologies RNAfbinv and incaRNAtion are merged together and fully implemented in our web server incaRNAfbinv, available at http://www.cs.bgu.ac.il/incaRNAfbinv. PMID:27185893

  17. The SMARTCyp cytochrome P450 metabolism prediction server

    DEFF Research Database (Denmark)

    Rydberg, Patrik; Gloriam, David Erik Immanuel; Olsen, Lars

    2010-01-01

    The SMARTCyp server is the first web application for site of metabolism prediction of cytochrome P450-mediated drug metabolism.

  18. MotifNet: a web-server for network motif analysis.

    Science.gov (United States)

    Smoly, Ilan Y; Lerman, Eugene; Ziv-Ukelson, Michal; Yeger-Lotem, Esti

    2017-06-15

    Network motifs are small topological patterns that recur in a network significantly more often than expected by chance. Their identification emerged as a powerful approach for uncovering the design principles underlying complex networks. However, available tools for network motif analysis typically require download and execution of computationally intensive software on a local computer. We present MotifNet, the first open-access web server for network motif analysis. MotifNet allows researchers to analyze integrated networks, where nodes and edges may be labeled, and to search for motifs of up to eight nodes. The output motifs are presented graphically, and the user can interactively filter them by their significance, number of instances, node and edge labels, and node identities, and view their instances. MotifNet also allows the user to distinguish between motifs that are centered on specific nodes and motifs that recur in distinct parts of the network. MotifNet is freely available at http://netbio.bgu.ac.il/motifnet. The website was implemented using ReactJS and supports all major browsers. The server interface was implemented in Python with data stored in a MySQL database. Contact: estiyl@bgu.ac.il or michaluz@cs.bgu.ac.il. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
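
    As a small illustration of what network motif analysis involves, the sketch below counts instances of the three-node feed-forward loop in a directed network using networkx. The example graph is invented, and the exhaustive triple enumeration shown here is only practical for tiny networks; it is not MotifNet's algorithm.

```python
# Minimal sketch: count feed-forward loop (A->B, B->C, A->C) instances.
# The edge list is a hypothetical example; MotifNet uses far more scalable methods.
from itertools import permutations
import networkx as nx

g = nx.DiGraph([
    ("TF1", "TF2"), ("TF2", "geneX"), ("TF1", "geneX"),
    ("TF2", "geneY"), ("TF1", "geneY"),
])

def count_feed_forward_loops(graph):
    count = 0
    for a, b, c in permutations(graph.nodes(), 3):
        if graph.has_edge(a, b) and graph.has_edge(b, c) and graph.has_edge(a, c):
            count += 1
    return count

print(count_feed_forward_loops(g))  # 2 in this toy network
```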

  19. The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update.

    Science.gov (United States)

    Huynh, Tien; Rigoutsos, Isidore

    2004-07-01

    In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences, the identification--directly from sequence--of structural deviations from alpha-helicity and the annotation of amino acid sequences for antimicrobial activity. Additionally, annotations for more than 130 archaeal, bacterial, eukaryotic and viral genomes are now available on-line and can be searched interactively. The tools and code bundles continue to be accessible from http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  20. ATLAS Live: Collaborative Information Streams

    CERN Document Server

    Goldfarb, S; The ATLAS collaboration

    2011-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at th...

  1. ATLAS Live: Collaborative Information Streams

    CERN Document Server

    Goldfarb, S; The ATLAS collaboration

    2010-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using the SCALA digital signage software system. The system is robust and flexible, allowing for the usage of scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intrascreen divisibility. The video is made available to the collaboration or public through the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video t...

  2. SSDL personnel dosimetry system: migration from a client-server system into a web-based system

    International Nuclear Information System (INIS)

    Maizura Ibrahim; Rosnah Shariff; Ahmad Bazlie Abdul Kadir; John Konsoh Sangau; Mohd Amin Sharifuldin Salleh; Taiman Kadni; Noriah Mod Ali

    2007-01-01

    The Personnel Dosimetry System has been used by the Secondary Standard Dosimetry Laboratory (SSDL), Nuclear Malaysia, for the past ten years. The system is a computerized database system built on a client-server concept. It has been used by the Film Badge Laboratory, SSDL, to record client details, calculate film badge doses, manage radiation worker data, generate dose reports, and retrieve statistical reports on film badge usage for reporting to monitoring bodies such as the Atomic Energy Licensing Board (AELB), the Ministry of Health and others. However, due to technical problems that frequently occur, the system is being replaced by a newly developed web-based system called e-SSDL. This paper describes the problems that regularly occurred in the previous system, explains how the client-server system was replaced with a web-based system, and outlines the differences between the previous and current systems. The paper also presents the detailed architecture of the new system and the new process introduced for processing film badges. (Author)

  3. VizPrimer: a web server for visualized PCR primer design based on known gene structure.

    Science.gov (United States)

    Zhou, Yang; Qu, Wubin; Lu, Yiming; Zhang, Yanchun; Wang, Xiaolei; Zhao, Dongsheng; Yang, Yi; Zhang, Chenggang

    2011-12-15

    The visualization of gene structure plays an important role in polymerase chain reaction (PCR) primer design, especially for eukaryotic genes with a number of splice variants that users need to distinguish between via PCR. Here, we describe a visualized web server for primer design named VizPrimer. It utilizes new information technology (IT) tools: HTML5 to display gene structure and JavaScript to interact with the users. In VizPrimer, users can focus their attention on the gene structure and primer design strategy, without wasting time calculating the exon positions of splice variants or manually configuring complicated parameters. In addition, VizPrimer is also suitable for the design of PCR primers for amplifying open reading frames and detecting single nucleotide polymorphisms (SNPs). VizPrimer is freely available at http://biocompute.bmi.ac.cn/CZlab/VizPrimer/. The web server supports the following browsers: Chrome (≥5.0), Firefox (≥3.0), Safari (≥4.0) and Opera (≥10.0). Contact: zhangcg@bmi.ac.cn; yangyi528@vip.sina.com.

  4. RStrucFam: a web server to associate structure and cognate RNA for RNA-binding proteins from sequence information.

    Science.gov (United States)

    Ghosh, Pritha; Mathew, Oommen K; Sowdhamini, Ramanathan

    2016-10-07

    RNA-binding proteins (RBPs) interact with their cognate RNA(s) to form large biomolecular assemblies. They are versatile in their functionality and are involved in a myriad of processes inside the cell. RBPs with similar structural features and common biological functions are grouped together into families and superfamilies. It will be useful to obtain an early understanding and association of the RNA-binding property of sequences of gene products. Here, we report a web server, RStrucFam, to predict the structure, type of cognate RNA(s) and function(s) of proteins, where possible, from mere sequence information. The web server employs Hidden Markov Model scan (hmmscan) to enable association to a back-end database of structural and sequence families. The database (HMMRBP) comprises 437 HMMs of RBP families of known structure that have been generated using structure-based sequence alignments and 746 sequence-centric RBP family HMMs. The input protein sequence is associated with structural or sequence domain families, if structure or sequence signatures exist. If the protein is associated with a family of known structures, output features such as a multiple structure-based sequence alignment (MSSA) of the query with all other members of that family are provided. Further, cognate RNA partner(s) for that protein, Gene Ontology (GO) annotations, if any, and a homology model of the protein can be obtained. The users can also browse through the database for details pertaining to each family, protein or RNA and their related information based on keyword search or RNA motif search. RStrucFam is a web server that exploits structurally conserved features of RBPs, derived from known family members and imprinted in mathematical profiles, to predict putative RBPs from sequence information. Proteins that fail to associate with such structure-centric families are further queried against the sequence-centric RBP family HMMs in the HMMRBP database. Further, all other essential
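
    The core association step described above relies on HMMER's hmmscan. The sketch below shows how such a scan might be wrapped in Python; the HMM database path and query file names are hypothetical placeholders, and the tabular-output parsing is a simplified assumption rather than RStrucFam's own pipeline.

```python
# Minimal sketch: run hmmscan against a profile HMM database and list hits.
# Paths are placeholders; requires HMMER (hmmscan) installed and on PATH.
import subprocess

def scan_sequence(query_fasta, hmm_db="HMMRBP.hmm", tbl_out="hits.tbl"):
    subprocess.run(
        ["hmmscan", "--tblout", tbl_out, hmm_db, query_fasta],
        check=True,
        stdout=subprocess.DEVNULL,
    )
    hits = []
    with open(tbl_out) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            fields = line.split()
            # target name, query name, full-sequence E-value (columns 0, 2, 4)
            hits.append((fields[0], fields[2], float(fields[4])))
    return sorted(hits, key=lambda h: h[2])

for target, query, evalue in scan_sequence("query_protein.fasta"):
    print(target, query, evalue)
```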

  5. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    Science.gov (United States)

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-08

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency-based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane protein (TMP) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs a topology prediction for TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Getting to the Source: a Survey of Quantitative Data Sources Available to the Everyday Librarian: Part 1: Web Server Log Analysis

    Directory of Open Access Journals (Sweden)

    Lisa Goddard

    2007-03-01

    This is the first part of a two-part article that provides a survey of data sources which are likely to be immediately available to the typical practitioner who wishes to engage in statistical analysis of collections and services within his or her own library. Part I outlines the data elements which can be extracted from web server logs, and discusses web log analysis tools. Part II looks at logs, reports, and data sources from proxy servers, resource vendors, link resolvers, federated search engines, institutional repositories, electronic reference services, and the integrated library system.

  7. Essential Mac OS X panther server administration integrating Mac OS X server into heterogeneous networks

    CERN Document Server

    Bartosh, Michael

    2004-01-01

    If you've ever wondered how to safely manipulate Mac OS X Panther Server's many underlying configuration files or needed to explain AFP permission mapping--this book's for you. From the command line to Apple's graphical tools, the book provides insight into this powerful server software. Topics covered include installation, deployment, server management, web application services, data gathering, and more

  8. AthMethPre: a web server for the prediction and query of mRNA m6A sites in Arabidopsis thaliana.

    Science.gov (United States)

    Xiang, Shunian; Yan, Zhangming; Liu, Ke; Zhang, Yaou; Sun, Zhirong

    2016-10-18

    N6-methyladenosine (m6A) is the most prevalent and abundant modification in mRNA and has been linked to many key biological processes. High-throughput experiments have generated m6A peaks across the transcriptome of A. thaliana, but the specific methylated sites were not assigned, which impedes the understanding of m6A functions in plants. Therefore, computational prediction of mRNA m6A sites becomes increasingly important. Here, we present a method to predict the m6A sites for A. thaliana mRNA sequence(s). To predict the m6A sites of an mRNA sequence, we employed a support vector machine to build a classifier using the features of the positional flanking nucleotide sequence and the position-independent k-mer nucleotide spectrum. Our method achieved good performance and was applied to a web server to provide a service for the prediction of A. thaliana m6A sites. The server also provides a comprehensive database of predicted transcriptome-wide m6A sites and curated m6A-seq peaks from the literature for query and visualization. The AthMethPre web server is the first web server that provides a user-friendly tool for the prediction and query of A. thaliana mRNA m6A sites, which is freely accessible for public use at .
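
    The classifier described above combines flanking-sequence features with a position-independent k-mer spectrum. The sketch below shows what such a feature encoding plus SVM training could look like with scikit-learn; the sequence windows and labels are invented toy data, and the feature set is deliberately simplified relative to AthMethPre's.

```python
# Minimal sketch: k-mer spectrum features + SVM, in the spirit of the method above.
# Training windows/labels are invented; not AthMethPre's real feature set or data.
from itertools import product
import numpy as np
from sklearn.svm import SVC

def kmer_spectrum(seq, k=3):
    kmers = ["".join(p) for p in product("ACGU", repeat=k)]
    counts = {kmer: 0 for kmer in kmers}
    for i in range(len(seq) - k + 1):
        sub = seq[i:i + k]
        if sub in counts:
            counts[sub] += 1
    total = max(1, len(seq) - k + 1)
    return np.array([counts[kmer] / total for kmer in kmers])

# Toy windows centred on a candidate adenosine (labels invented for illustration).
windows = ["GGACUGGACUAAGGACU", "UUUAGAACAAUCCAUGG", "GGACAGGACAAAGGACA", "CCCUUUACGUAUCGAUA"]
labels = [1, 0, 1, 0]

X = np.vstack([kmer_spectrum(w) for w in windows])
model = SVC(kernel="rbf").fit(X, labels)

print(model.predict(kmer_spectrum("GGACUAGGACUUGGACU").reshape(1, -1)))
```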

  9. New data access with HTTP/WebDAV in the ATLAS experiment

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration; Serfon, Cedric; Garonne, Vincent; Blunier, Sylvain; Lavorini, Vincenzo; Nilsson, Paul

    2015-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in the years 2010-2012, distributed computing has become the established way to analyze collider data. The ATLAS experiment Grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centres to smaller university clusters. So far the storage technologies and access protocols to the clusters that host this tremendous amount of data vary from site to site. HTTP/WebDAV offers the possibility to use a unified industry standard to access the storage. We present the deployment and testing of HTTP/WebDAV for local and remote data access in the ATLAS experiment for the new data management system Rucio and the PanDA workload management system. Deployment and large scale tests have been performed using the Grid testing system HammerCloud and the ROOT HTTP plugin Davix.
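
    WebDAV extends HTTP with methods such as PROPFIND for listing remote resources. As a rough illustration of the kind of unified, standards-based access the abstract describes, the sketch below issues a PROPFIND request with the Python requests library; the endpoint URL and bearer token are hypothetical placeholders, not an actual ATLAS storage endpoint, and real Grid sites typically authenticate with X.509/VOMS credentials instead.

```python
# Minimal sketch: list a remote WebDAV directory with a PROPFIND request.
# The URL and token are placeholders; real Grid endpoints use X.509/VOMS auth.
import requests
import xml.etree.ElementTree as ET

url = "https://storage.example.org/webdav/atlasdatadisk/"  # hypothetical endpoint
headers = {"Depth": "1", "Authorization": "Bearer <token>"}

response = requests.request("PROPFIND", url, headers=headers, timeout=30)
response.raise_for_status()

# A 207 Multi-Status body is XML; extract the href of each listed resource.
tree = ET.fromstring(response.content)
for href in tree.iter("{DAV:}href"):
    print(href.text)
```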

  11. From the CERN web: LHCb, ATLAS, ILC and more

    CERN Multimedia

    2015-01-01

    This new section highlights articles, blog posts and press releases published in the CERN web environment over the past weeks. This way, you won’t miss a thing... "LHCb sees small deviations from lepton universality" (1 September – LHCb collaboration): The LHCb experiment at CERN has made the first measurement at a hadron collider of B meson decays that have already shown small deviations from the predictions of the Standard Model in earlier studies at an electron-positron collider. [Figure caption: the density of allowed supersymmetric models before and after the ATLAS Run 1 searches; the missing points have been ruled out by the LHC data; the x-axis shows the mass of the supersymmetric dark matter particle, while the y-axis shows the predicted density of those particles in the universe.] "ATLAS is narrowing down the theoretical candidates for dark matter" (25 August – ATLAS collab...

  12. Mergeomics: a web server for identifying pathological pathways, networks, and key regulators via multidimensional data integration.

    Science.gov (United States)

    Arneson, Douglas; Bhattacharya, Anindya; Shu, Le; Mäkinen, Ville-Petteri; Yang, Xia

    2016-09-09

    Human diseases are commonly the result of multidimensional changes at molecular, cellular, and systemic levels. Recent advances in genomic technologies have enabled an outpour of omics datasets that capture these changes. However, separate analyses of these various data only provide fragmented understanding and do not capture the holistic view of disease mechanisms. To meet the urgent needs for tools that effectively integrate multiple types of omics data to derive biological insights, we have developed Mergeomics, a computational pipeline that integrates multidimensional disease association data with functional genomics and molecular networks to retrieve biological pathways, gene networks, and central regulators critical for disease development. To make the Mergeomics pipeline available to a wider research community, we have implemented an online, user-friendly web server (http://mergeomics.idre.ucla.edu/). The web server features a modular implementation of the Mergeomics pipeline with detailed tutorials. Additionally, it provides curated genomic resources including tissue-specific expression quantitative trait loci, ENCODE functional annotations, biological pathways, and molecular networks, and offers interactive visualization of analytical results. Multiple computational tools including Marker Dependency Filtering (MDF), Marker Set Enrichment Analysis (MSEA), Meta-MSEA, and Weighted Key Driver Analysis (wKDA) can be used separately or in flexible combinations. User-defined summary-level genomic association datasets (e.g., genetic, transcriptomic, epigenomic) related to a particular disease or phenotype can be uploaded and computed real-time to yield biologically interpretable results, which can be viewed online and downloaded for later use. Our Mergeomics web server offers researchers flexible and user-friendly tools to facilitate integration of multidimensional data into holistic views of disease mechanisms in the form of tissue-specific key regulators

  13. Web servers and services for electrostatics calculations with APBS and PDB2PQR

    Science.gov (United States)

    Unni, Samir; Huang, Yong; Hanson, Robert; Tobias, Malcolm; Krishnan, Sriram; Li, Wilfred W.; Nielsen, Jens E.; Baker, Nathan A.

    2011-01-01

    APBS and PDB2PQR are widely utilized free software packages for biomolecular electrostatics calculations. Using the Opal toolkit, we have developed a Web services framework for these software packages that enables the use of APBS and PDB2PQR by users who do not have local access to the necessary computational capabilities. This not only increases accessibility of the software to a wider range of scientists, educators, and students, but it also increases the availability of electrostatics calculations on portable computing platforms. Users can access this new functionality in two ways. First, an Opal-enabled version of APBS is provided in current distributions, available freely on the web. Second, we have extended the PDB2PQR web server to provide an interface for the setup, execution, and visualization of electrostatic potentials as calculated by APBS. This web interface also uses the Opal framework, which ensures the scalability needed to support the large APBS user community. Both of these resources are available from the APBS/PDB2PQR website: http://www.poissonboltzmann.org/. PMID:21425296
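
    For users running the tools locally rather than through the web services, a typical workflow is to convert a PDB file to PQR with PDB2PQR and then run APBS on the generated input. The sketch below wraps that workflow in Python; the file names are placeholders and the exact command-line flags may differ between PDB2PQR/APBS versions, so treat it as an assumption-laden outline rather than the project's documented interface.

```python
# Minimal sketch: local PDB2PQR + APBS run (file names are placeholders).
# Assumes the pdb2pqr30 and apbs executables are installed and on PATH;
# flag names may vary between versions of the tools.
import subprocess

def run_electrostatics(pdb_file="protein.pdb", pqr_file="protein.pqr", apbs_in="protein.in"):
    # Assign charges/radii with a force field and emit an APBS input file.
    subprocess.run(
        ["pdb2pqr30", "--ff=AMBER", "--apbs-input", apbs_in, pdb_file, pqr_file],
        check=True,
    )
    # Solve the Poisson-Boltzmann equation on the prepared structure.
    subprocess.run(["apbs", apbs_in], check=True)

if __name__ == "__main__":
    run_electrostatics()
```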

  14. RegRNA: an integrated web server for identifying regulatory RNA motifs and elements

    OpenAIRE

    Huang, Hsi-Yuan; Chien, Chia-Hung; Jen, Kuan-Hua; Huang, Hsien-Da

    2006-01-01

    Numerous regulatory structural motifs have been identified as playing essential roles in transcriptional and post-transcriptional regulation of gene expression. RegRNA is an integrated web server for identifying the homologs of regulatory RNA motifs and elements against an input mRNA sequence. Both sequence homologs and structural homologs of regulatory RNA motifs can be recognized. The regulatory RNA motifs supported in RegRNA are categorized into several classes: (i) motifs in mRNA 5′-untra...

  15. COGcollator: a web server for analysis of distant relationships between homologous protein families.

    Science.gov (United States)

    Dibrova, Daria V; Konovalov, Kirill A; Perekhvatov, Vadim V; Skulachev, Konstantin V; Mulkidjanian, Armen Y

    2017-11-29

    The Clusters of Orthologous Groups (COGs) of proteins systematize evolutionary related proteins into specific groups with similar functions. However, the available databases do not provide means to assess the extent of similarity between the COGs. We intended to provide a method for identification and visualization of evolutionary relationships between the COGs, as well as a respective web server. Here we introduce the COGcollator, a web tool for identification of evolutionarily related COGs and their further analysis. We demonstrate the utility of this tool by identifying the COGs that contain distant homologs of (i) the catalytic subunit of bacterial rotary membrane ATP synthases and (ii) the DNA/RNA helicases of the superfamily 1. This article was reviewed by Drs. Igor N. Berezovsky, Igor Zhulin and Yuri Wolf.

  16. Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools

    CERN Document Server

    Sanchez, Arturo; The ATLAS collaboration

    2015-01-01

    We explore the potentialities of current web applications to create online interfaces that allow the visualization, interaction and real physics cut-based analysis and monitoring of processes through a web browser. The project consists in the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a browser, reading and using real data and official Monte-Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based $H \rightarrow ZZ \rightarrow llqq$ analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.

  17. Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools

    CERN Document Server

    Pineda, A S

    2015-01-01

    We explore the potential of current web applications to create online interfaces that allow the visualization, interaction and real cut-based physics analysis and monitoring of processes through a web browser. The project consists in the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a browser, reading and using real data and official Monte-Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based H → ZZ → llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.

  18. LabKey Server NAb: A tool for analyzing, visualizing and sharing results from neutralizing antibody assays

    Directory of Open Access Journals (Sweden)

    Gao Hongmei

    2011-05-01

    Background: Multiple types of assays allow sensitive detection of virus-specific neutralizing antibodies. For example, the extent of antibody neutralization of HIV-1, SIV and SHIV can be measured in the TZM-bl cell line through the degree of luciferase reporter gene expression after infection. In the past, neutralization curves and titers for this standard assay have been calculated using an Excel macro. Updating all instances of such a macro with new techniques can be unwieldy and introduce non-uniformity across multi-lab teams. Using Excel also poses challenges in centrally storing, sharing and associating raw data files and results. Results: We present LabKey Server's NAb tool for organizing, analyzing and securely sharing data, files and results for neutralizing antibody (NAb) assays, including the luciferase-based TZM-bl NAb assay. The customizable tool supports high-throughput experiments and includes a graphical plate template designer, allowing researchers to quickly adapt calculations to new plate layouts. The tool calculates the percent neutralization for each serum dilution based on luminescence measurements, fits a range of neutralization curves to titration results and uses these curves to estimate the neutralizing antibody titers for benchmark dilutions. Results, curve visualizations and raw data files are stored in a database and shared through a secure, web-based interface. NAb results can be integrated with other data sources based on sample identifiers. It is simple to make results public after publication by updating folder security settings. Conclusions: Standardized tools for analyzing, archiving and sharing assay results can improve the reproducibility, comparability and reliability of results obtained across many labs. LabKey Server and its NAb tool are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. Many members of the HIV research community can also access the Lab
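
    To illustrate the calculation the NAb tool automates, the sketch below computes percent neutralization from luminescence readings and fits a four-parameter logistic curve with SciPy to estimate the dilution giving 50% neutralization. The readings, dilutions and controls are invented toy numbers, and the 4PL form is a common convention rather than necessarily the exact model LabKey uses.

```python
# Minimal sketch: percent neutralization + 4-parameter logistic fit for an NAb titer.
# All numbers are invented; the 4PL model is a common choice, not LabKey's exact one.
import numpy as np
from scipy.optimize import curve_fit

virus_only = 100000.0   # mean luminescence, virus + cells, no serum
cell_only = 2000.0      # mean luminescence, cells only (background)

dilutions = np.array([20, 60, 180, 540, 1620, 4860], dtype=float)
luminescence = np.array([12000, 25000, 52000, 80000, 93000, 98000], dtype=float)

# Percent neutralization relative to the virus-only and background controls.
neutralization = 100.0 * (virus_only - luminescence) / (virus_only - cell_only)

def four_pl(x, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

params, _ = curve_fit(four_pl, dilutions, neutralization,
                      p0=[0.0, 100.0, 200.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated 50% neutralization titer (dilution): {ic50:.1f}")
```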

  19. Linux Server Security

    CERN Document Server

    Bauer, Michael D

    2005-01-01

    Linux consistently appears high up in the list of popular Internet servers, whether it's for the Web, anonymous FTP, or general services such as DNS and delivering mail. But security is the foremost concern of anyone providing such a service. Any server experiences casual probe attempts dozens of times a day, and serious break-in attempts with some frequency as well. This highly regarded book, originally titled Building Secure Servers with Linux, combines practical advice with a firm knowledge of the technical tools needed to ensure security. The book focuses on the most common use of Linux--

  20. iTAR: a web server for identifying target genes of transcription factors using ChIP-seq or ChIP-chip data.

    Science.gov (United States)

    Yang, Chia-Chun; Andrews, Erik H; Chen, Min-Hsuan; Wang, Wan-Yu; Chen, Jeremy J W; Gerstein, Mark; Liu, Chun-Chi; Cheng, Chao

    2016-08-12

    Chromatin immunoprecipitation followed by massively parallel DNA sequencing (ChIP-seq) or microarray hybridization (ChIP-chip) has been widely used to determine the genomic occupation of transcription factors (TFs). We have previously developed a probabilistic method, called TIP (Target Identification from Profiles), to identify TF target genes using ChIP-seq/ChIP-chip data. To achieve high specificity, TIP applies a conservative method to estimate significance of target genes, with the trade-off being a relatively low sensitivity of target gene identification compared to other methods. Additionally, TIP's output does not render binding-peak locations or intensity, information highly useful for visualization and general experimental biological use, while the variability of ChIP-seq/ChIP-chip file formats has made input into TIP more difficult than desired. To improve upon these facets, here we present a refined TIP with key extensions. First, it implements a Gaussian mixture model for p-value estimation, increasing target gene identification sensitivity and more accurately capturing the shape of TF binding profile distributions. Second, it enables the incorporation of TF binding-peak data by identifying their locations in significant target gene promoter regions and quantifying their strengths. Finally, for full ease of implementation we have incorporated it into a web server (http://syslab3.nchu.edu.tw/iTAR/) that enables flexibility of input file format, can be used across multiple species and genome assembly versions, and is freely available for public use. The web server additionally performs GO enrichment analysis for the identified target genes to reveal the potential function of the corresponding TF. The iTAR web server provides a user-friendly interface and supports target gene identification in seven species, ranging from yeast to human. To facilitate investigating the quality of ChIP-seq/ChIP-chip data, the web server generates the chart of the
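
    The refinement described above replaces a single conservative null model with a Gaussian mixture for p-value estimation. A rough sketch of that idea with scikit-learn is shown below: fit a two-component mixture to regulatory scores, treat the lower-mean component as the null, and derive a p-value-like tail probability. The scores are simulated, and the exact statistical procedure in iTAR may differ.

```python
# Minimal sketch: two-component Gaussian mixture as a null model for gene scores.
# Scores are simulated; iTAR's actual p-value procedure may differ in detail.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
scores = np.concatenate([
    rng.normal(0.0, 1.0, 900),   # background (non-target) genes
    rng.normal(3.0, 1.0, 100),   # putative target genes
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)

# Treat the component with the lower mean as the null distribution.
null = int(np.argmin(gmm.means_.ravel()))
mu = gmm.means_.ravel()[null]
sigma = np.sqrt(gmm.covariances_.ravel()[null])

# Upper-tail p-value of a new gene's score under the null component.
new_score = 2.5
p_value = norm.sf(new_score, loc=mu, scale=sigma)
print(f"p = {p_value:.4f}")
```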

  1. CACHING DATA STORED IN SQL SERVER FOR OPTIMIZING THE PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2016-12-01

    This paper presents the architecture of a web site together with the different techniques used to optimize the performance of loading web content. The architecture presented here is for an e-commerce site developed on Windows with MVC, IIS and Microsoft SQL Server. Caching data is one such technique, used by browsers, by web servers themselves or by proxy servers. Caching is performed without the knowledge of users, yet it still needs to provide the most recent information from the server. This means that the caching mechanism has to be aware of any modification of data on the server. An e-commerce site presents different kinds of product-related information, such as images, product codes, descriptions, properties or stock levels.
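
    The key point in the abstract is that a cache must serve recent data while being invalidated when the underlying records change. As a language-agnostic illustration (sketched here in Python rather than the ASP.NET/SQL Server stack the paper uses), the snippet below caches query results with a time-to-live and explicit invalidation on writes; the names and TTL value are arbitrary.

```python
# Minimal sketch: result cache with TTL and explicit invalidation on data changes.
# Illustrative only; the paper's implementation targets ASP.NET MVC + SQL Server.
import time

class ProductCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (timestamp, value)

    def get(self, key, loader):
        """Return a cached value, reloading it if missing or expired."""
        entry = self._store.get(key)
        if entry is not None and time.time() - entry[0] < self.ttl:
            return entry[1]
        value = loader(key)                 # e.g. a database query
        self._store[key] = (time.time(), value)
        return value

    def invalidate(self, key):
        """Call when the underlying record is modified on the server."""
        self._store.pop(key, None)

cache = ProductCache(ttl_seconds=30)
load_product = lambda product_id: {"id": product_id, "stock": 5}  # stand-in for SQL
print(cache.get("SKU-123", load_product))   # loads and caches
cache.invalidate("SKU-123")                 # drop after an update
```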

  2. 3dRPC: a web server for 3D RNA-protein structure prediction.

    Science.gov (United States)

    Huang, Yangyu; Li, Haotian; Xiao, Yi

    2018-04-01

    RNA-protein interactions occur in many biological processes. To understand the mechanism of these interactions, one needs to know the three-dimensional (3D) structures of RNA-protein complexes. 3dRPC is an algorithm for prediction of 3D RNA-protein complex structures that consists of a docking algorithm, RPDOCK, and a scoring function, 3dRPC-Score. RPDOCK is used to sample possible complex conformations of an RNA and a protein by calculating the geometric and electrostatic complementarities and stacking interactions at the RNA-protein interface according to the features of atom packing of the interface. 3dRPC-Score is a knowledge-based potential that uses the conformations of nucleotide-amino-acid pairs as statistical variables and that is used to choose the near-native complex conformations obtained from the docking method above. Recently, we built a web server for 3dRPC. Users can easily use 3dRPC without installing it locally. RNA and protein structures in PDB (Protein Data Bank) format are the only needed input files. It can also incorporate the information of interface residues or residue pairs obtained from experiments or theoretical predictions to improve the prediction. The address of the 3dRPC web server is http://biophy.hust.edu.cn/3dRPC. Contact: yxiao@hust.edu.cn.

  3. Tank Information System (TIS): a Case Study in Migrating a Web Mapping Application from Flex to Dojo for ArcGIS Server and then to Open Source

    Science.gov (United States)

    Pulsani, B. R.

    2017-11-01

    Tank Information System is a web application which provides comprehensive information about the minor irrigation tanks of Telangana State. As part of the program, a web mapping application using Flex and ArcGIS Server was developed to make the data available to the public. In course of time, as Flex became outdated, a migration of the client interface to the latest JavaScript-based technologies was carried out. Initially, the Flex-based application was migrated to the ArcGIS JavaScript API using the Dojo Toolkit. Both client applications used published services from ArcGIS Server. To check the migration pattern from proprietary to open source, the JavaScript-based ArcGIS application was later migrated to OpenLayers and the Dojo Toolkit, using published services from GeoServer. The migration pattern observed in the study especially emphasizes the use of the Dojo Toolkit and a PostgreSQL database for ArcGIS Server so that migration to open source can be performed effortlessly. The current application provides a case study which could assist organizations in migrating their proprietary ArcGIS web applications to open source. Furthermore, the study reveals the cost benefits of adopting open source over commercial software.

  4. MetaRanker 2.0: a web server for prioritization of genetic variation data.

    Science.gov (United States)

    Pers, Tune H; Dworzyński, Piotr; Thomas, Cecilia Engel; Lage, Kasper; Brunak, Søren

    2013-07-01

    MetaRanker 2.0 is a web server for prioritization of common and rare frequency genetic variation data. Based on heterogeneous data sets including genetic association data, protein-protein interactions, large-scale text-mining data, copy number variation data and gene expression experiments, MetaRanker 2.0 prioritizes the protein-coding part of the human genome to shortlist candidate genes for targeted follow-up studies. MetaRanker 2.0 is made freely available at www.cbs.dtu.dk/services/MetaRanker-2.0.

  5. PRince: a web server for structural and physicochemical analysis of protein-RNA interface.

    Science.gov (United States)

    Barik, Amita; Mishra, Abhishek; Bahadur, Ranjit Prasad

    2012-07-01

    We have developed a web server, PRince, which analyzes the structural features and physicochemical properties of the protein-RNA interface. Users need to submit a PDB file containing the atomic coordinates of both the protein and the RNA molecules in complex form (in '.pdb' format). They should also mention the chain identifiers of the interacting protein and RNA molecules. The size of the protein-RNA interface is estimated by measuring the solvent accessible surface area buried in contact. For a given protein-RNA complex, PRince calculates structural, physicochemical and hydration properties of the interacting surfaces. All these parameters generated by the server are presented in a tabular format. The interacting surfaces can also be visualized with a software plug-in like Jmol. In addition, the output files containing the list of the atomic coordinates of the interacting protein, RNA and interface water molecules can be downloaded. The parameters generated by PRince are novel, and users can correlate them with the experimentally determined biophysical and biochemical parameters for a better understanding of the specificity of the protein-RNA recognition process. This server will be continuously upgraded to include more parameters. PRince is publicly accessible and free for use. Available at http://www.facweb.iitkgp.ernet.in/~rbahadur/prince/home.html.
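
    The interface size mentioned above is conventionally estimated as the buried solvent accessible surface area (BSA), computed from the ASA of the two free molecules and of the complex; whether PRince applies exactly this convention (including the factor of one half) is an assumption here rather than a statement from the abstract.

```latex
\mathrm{BSA} = \tfrac{1}{2}\left(\mathrm{ASA}_{\text{protein}} + \mathrm{ASA}_{\text{RNA}} - \mathrm{ASA}_{\text{complex}}\right)
```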

  6. Web-accessible molecular modeling with Rosetta: The Rosetta Online Server that Includes Everyone (ROSIE).

    Science.gov (United States)

    Moretti, Rocco; Lyskov, Sergey; Das, Rhiju; Meiler, Jens; Gray, Jeffrey J

    2018-01-01

    The Rosetta molecular modeling software package provides a large number of experimentally validated tools for modeling and designing proteins, nucleic acids, and other biopolymers, with new protocols being added continually. While freely available to academic users, external usage is limited by the need for expertise in the Unix command line environment. To make Rosetta protocols available to a wider audience, we previously created a web server called Rosetta Online Server that Includes Everyone (ROSIE), which provides a common environment for hosting web-accessible Rosetta protocols. Here we describe a simplification of the ROSIE protocol specification format, one that permits easier implementation of Rosetta protocols. Whereas the previous format required creating multiple separate files in different locations, the new format allows specification of the protocol in a single file. This new, simplified protocol specification has more than doubled the number of Rosetta protocols available under ROSIE. These new applications include pKa determination, lipid accessibility calculation, ribonucleic acid redesign, protein-protein docking, protein-small molecule docking, symmetric docking, antibody docking, cyclic toxin docking, critical binding peptide determination, and mapping small molecule binding sites. ROSIE is freely available to academic users at http://rosie.rosettacommons.org. © 2017 The Protein Society.

  7. World wide web implementation of the Langley technical report server

    Science.gov (United States)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.

    1994-01-01

    On January 14, 1993, NASA Langley Research Center (LaRC) made approximately 130 formal, 'unclassified, unlimited' technical reports available via the anonymous FTP Langley Technical Report Server (LTRS). LaRC was the first organization to provide a significant number of aerospace technical reports for open electronic dissemination. LTRS has been successful in its first 18 months of operation, with over 11,000 reports distributed and has helped lay the foundation for electronic document distribution for NASA. The availability of World Wide Web (WWW) technology has revolutionized the Internet-based information community. This paper describes the transition of LTRS from a centralized FTP site to a distributed data model using the WWW, and suggests how the general model for LTRS can be applied to other similar systems.

  8. Using the Textpresso Site-Specific Recombinases Web server to identify Cre expressing mouse strains and floxed alleles.

    Science.gov (United States)

    Condie, Brian G; Urbanski, William M

    2014-01-01

    Effective tools for searching the biomedical literature are essential for identifying reagents or mouse strains as well as for effective experimental design and informed interpretation of experimental results. We have built the Textpresso Site Specific Recombinases (Textpresso SSR) Web server to enable researchers who use mice to perform in-depth searches of a rapidly growing and complex part of the mouse literature. Our Textpresso Web server provides an interface for searching the full text of most of the peer-reviewed publications that report the characterization or use of mouse strains that express Cre or Flp recombinase. The database also contains most of the publications that describe the characterization or analysis of strains carrying conditional alleles or transgenes that can be inactivated or activated by site-specific recombinases such as Cre or Flp. Textpresso SSR complements the existing online databases that catalog Cre and Flp expression patterns by providing a unique online interface for the in-depth text mining of the site specific recombinase literature.

  9. SA-Mot: a web server for the identification of motifs of interest extracted from protein loops.

    Science.gov (United States)

    Regad, Leslie; Saladin, Adrien; Maupetit, Julien; Geneix, Colette; Camproux, Anne-Claude

    2011-07-01

    The detection of functional motifs is an important step for the determination of protein functions. We present here a new web server SA-Mot (Structural Alphabet Motif) for the extraction and location of structural motifs of interest from protein loops. Contrary to other methods, SA-Mot does not focus only on functional motifs, but it extracts recurrent and conserved structural motifs involved in structural redundancy of loops. SA-Mot uses the structural word notion to extract all structural motifs from uni-dimensional sequences corresponding to loop structures. Then, SA-Mot provides a description of these structural motifs using statistics computed in the loop data set and in SCOP superfamily, sequence and structural parameters. SA-Mot results correspond to an interactive table listing all structural motifs extracted from a target structure and their associated descriptors. Using this information, the users can easily locate loop regions that are important for the protein folding and function. The SA-Mot web server is available at http://sa-mot.mti.univ-paris-diderot.fr.

  10. Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools

    CERN Document Server

    Sanchez, Arturo; The ATLAS collaboration

    2015-01-01

    We explore the potentialities of current web applications to create online interfaces that allow the visualization, interaction and real physics cut-based analysis and monitoring of processes through a web browser. The project consists in the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a browser, reading and using real data and official Monte-Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based H->ZZ->llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online; this presentation describes the tests, plans and future upgrades.

  11. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvement via Machine Learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges and community. In this talk the ATLAS Computing Agora (ACA) web platform will be presented as well as some of the specific material developed for some of the projects.

  12. IRESPred: Web Server for Prediction of Cellular and Viral Internal Ribosome Entry Site (IRES)

    Science.gov (United States)

    Kolekar, Pandurang; Pataskar, Abhijeet; Kulkarni-Kale, Urmila; Pal, Jayanta; Kulkarni, Abhijeet

    2016-01-01

    Cellular mRNAs are predominantly translated in a cap-dependent manner. However, some viral and a subset of cellular mRNAs initiate their translation in a cap-independent manner. This requires the presence of a structured RNA element, known as an Internal Ribosome Entry Site (IRES), in their 5′ untranslated regions (UTRs). Experimental demonstration of an IRES in a UTR remains a challenging task. Computational prediction of IRES merely based on sequence and structure conservation is also difficult, particularly for cellular IRES. A web server, IRESPred, is developed for prediction of both viral and cellular IRES using a Support Vector Machine (SVM). The predictive model was built using 35 features that are based on sequence and structural properties of UTRs and the probabilities of interactions between UTR and small subunit ribosomal proteins (SSRPs). The model was found to have 75.51% accuracy, 75.75% sensitivity, 75.25% specificity, 75.75% precision and a Matthews Correlation Coefficient (MCC) of 0.51 in blind testing. IRESPred was found to perform better than the only available viral IRES prediction server, VIPS. The IRESPred server is freely available at http://bioinfo.net.in/IRESPred/. PMID:27264539
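
    The Matthews Correlation Coefficient reported above is a standard summary of binary-classification performance. The sketch below shows the usual formula applied to confusion-matrix counts; the counts in the example are arbitrary and are not IRESPred's evaluation numbers.

```python
# Minimal sketch: Matthews Correlation Coefficient from confusion-matrix counts.
# Example counts are arbitrary; they are not IRESPred's evaluation numbers.
from math import sqrt

def matthews_cc(tp, tn, fp, fn):
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

print(round(matthews_cc(tp=75, tn=75, fp=25, fn=25), 2))  # 0.5 for these toy counts
```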

  13. ATLAS software stack on ARM64

    CERN Document Server

    Smith, Joshua Wyatt; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment explores new hardware and software platforms that, in the future, may be more suited to its data intensive workloads. One such alternative hardware platform is the ARM architecture, which is designed to be extremely power efficient and is found in most smartphones and tablets. CERN openlab recently installed a small cluster of ARM 64-bit evaluation prototype servers. Each server is based on a single-socket ARM 64-bit system on a chip, with 32 Cortex-A57 cores. In total, each server has 128 GB RAM connected with four fast memory channels. This paper reports on the port of the ATLAS software stack onto these new prototype ARM64 servers. This included building the "external" packages that the ATLAS software relies on. Patches were needed to introduce this new architecture into the build as well as patches that correct for platform specific code that caused failures on non-x86 architectures. These patches were applied such that porting to further platforms will need no or only very little adj...

  14. pocketZebra: a web-server for automated selection and classification of subfamily-specific binding sites by bioinformatic analysis of diverse protein families.

    Science.gov (United States)

    Suplatov, Dmitry; Kirilin, Eugeny; Arbatsky, Mikhail; Takhaveev, Vakil; Svedas, Vytas

    2014-07-01

    The new web-server pocketZebra implements the power of bioinformatics and geometry-based structural approaches to identify and rank subfamily-specific binding sites in proteins by functional significance, and select particular positions in the structure that determine selective accommodation of ligands. A new scoring function has been developed to annotate binding sites by the presence of the subfamily-specific positions in diverse protein families. pocketZebra web-server has multiple input modes to meet the needs of users with different experience in bioinformatics. The server provides on-site visualization of the results as well as off-line version of the output in annotated text format and as PyMol sessions ready for structural analysis. pocketZebra can be used to study structure-function relationship and regulation in large protein superfamilies, classify functionally important binding sites and annotate proteins with unknown function. The server can be used to engineer ligand-binding sites and allosteric regulation of enzymes, or implemented in a drug discovery process to search for potential molecular targets and novel selective inhibitors/effectors. The server, documentation and examples are freely available at http://biokinet.belozersky.msu.ru/pocketzebra and there are no login requirements. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. An object-oriented approach to deploying highly configurable Web interfaces for the ATLAS experiment

    International Nuclear Information System (INIS)

    Lange, Bruno; Maidantchik, Carmen; Pavani, Varlen; Arosa, Breno; Abreu, Igor; Pommes, Kathy

    2015-01-01

    The ATLAS Technical Coordination has at its disposal 17 Web systems to support its operation. These applications, whilst ranging from managing the process of publishing scientific papers to monitoring radiation levels in the equipment in the experimental cavern, are constantly prone to changes in requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. FENCE assembles classes to build applications by making extensive use of JSON configuration files. It relies heavily on Glance, a technology that was set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers around double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to their description, thus ensuring that view/edit privileges are granted to eligible users only. The framework also provides tools for securely writing into a database. Fully HTML5-compliant multi-step forms can be generated from their JSON description to assure that the submitted data comply with a series of constraints. Input validation is carried out primarily on the server side but, following progressive enhancement guidelines, verification might also be performed on the client side by enabling specific markup data attributes which are then handed over to the jQuery validation plug-in. User monitoring is accomplished by thoroughly logging user requests along with any POST data. Documentation is built from the source code using the phpDocumentor tool and made readily available for developers online. FENCE, therefore, speeds up the implementation of Web interfaces and reduces the response time to requirement changes by minimizing maintenance overhead. (paper)

  16. An object-oriented approach to deploying highly configurable Web interfaces for the ATLAS experiment

    Science.gov (United States)

    Lange, Bruno; Maidantchik, Carmen; Pommes, Kathy; Pavani, Varlen; Arosa, Breno; Abreu, Igor

    2015-12-01

    The ATLAS Technical Coordination has 17 Web systems at its disposal to support its operation. These applications, whilst ranging from managing the process of publishing scientific papers to monitoring radiation levels in the equipment in the experimental cavern, are constantly prone to changes in requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. FENCE assembles classes to build applications by making extensive use of JSON configuration files. It relies heavily on Glance, a technology that was set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers around double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to their description, thus ensuring that view/edit privileges are granted to eligible users only. The framework also provides tools for securely writing into a database. Fully HTML5-compliant multi-step forms can be generated from their JSON description to assure that the submitted data comply with a series of constraints. Input validation is carried out primarily on the server-side but, following progressive enhancement guidelines, verification might also be performed on the client-side by enabling specific markup data attributes which are then handed over to the jQuery validation plug-in. User monitoring is accomplished by thoroughly logging user requests along with any POST data. Documentation is built from the source code using the phpDocumentor tool and made readily available for developers online. FENCE, therefore, speeds up the implementation of Web interfaces and reduces the response time to requirement changes by minimizing maintenance overhead.

  17. EnviroAtlas - Population and Residential Activity in the Conterminous U.S. Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service includes maps that illustrate population and residential activity in each census block group as well as residential-location-based...

  18. ORCAN-a web-based meta-server for real-time detection and functional annotation of orthologs.

    Science.gov (United States)

    Zielezinski, Andrzej; Dziubek, Michal; Sliski, Jan; Karlowski, Wojciech M

    2017-04-15

    ORCAN (ORtholog sCANner) is a web-based meta-server for one-click evolutionary and functional annotation of protein sequences. The server combines information from the most popular orthology-prediction resources, including four tools and four online databases. Functional annotation utilizes five additional comparisons between the query and identified homologs, including: sequence similarity, protein domain architectures, functional motifs, Gene Ontology term assignments and a list of associated articles. Furthermore, the server uses a plurality-based rating system to evaluate the orthology relationships and to rank the reference proteins by their evolutionary and functional relevance to the query. Using a dataset of ∼1 million true yeast orthologs as a sample reference set, we show that combining multiple orthology-prediction tools in ORCAN increases the sensitivity and precision by 1-2 percentage points. The service is available for free at http://www.combio.pl/orcan/ . wmk@amu.edu.pl. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  19. EnviroAtlas - NHDPlus V2 Hydrologic Unit Boundaries Web Service - Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service contains layers depicting hydrologic unit boundary layers and labels for the Subregion level (4-digit HUCs), Subbasin level (8-digit...

  20. Use of World Wide Web Server and Browser Software To Support a First-Year Medical Physiology Course.

    Science.gov (United States)

    Davis, Michael J.; And Others

    1997-01-01

    Describes the use of a World Wide Web server to support a team-taught physiology course for first-year medical students. The students' evaluations indicate that computer use in class made lecture material more interesting, while the online documents helped reinforce lecture materials and textbooks. Lists factors which contribute to the…

  1. EnviroAtlas

    Data.gov (United States)

    City and County of Durham, North Carolina — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  2. Allen Brain Atlas-Driven Visualizations: a web-based gene expression energy visualization tool.

    Science.gov (United States)

    Zaldivar, Andrew; Krichmar, Jeffrey L

    2014-01-01

    The Allen Brain Atlas-Driven Visualizations (ABADV) is a publicly accessible web-based tool created to retrieve and visualize expression energy data from the Allen Brain Atlas (ABA) across multiple genes and brain structures. Though the ABA offers their own search engine and software for researchers to view their growing collection of online public data sets, including extensive gene expression and neuroanatomical data from human and mouse brain, many of their tools limit the number of genes and brain structures researchers can view at once. To complement their work, ABADV generates multiple pie charts, bar charts and heat maps of expression energy values for any given set of genes and brain structures. Such a suite of free and easy-to-understand visualizations allows for easy comparison of gene expression across multiple brain areas. In addition, each visualization links back to the ABA so researchers may view a summary of the experimental detail. ABADV is currently supported on modern web browsers and is compatible with expression energy data from the Allen Mouse Brain Atlas in situ hybridization data. By creating this web application, researchers can immediately obtain and survey large amounts of expression energy data from the ABA, which they can then use to supplement their work or perform meta-analysis. In the future, we hope to enable ABADV across multiple data resources.

  3. Allen Brain Atlas-Driven Visualizations: A Web-Based Gene Expression Energy Visualization Tool

    Directory of Open Access Journals (Sweden)

    Andrew eZaldivar

    2014-05-01

    Full Text Available The Allen Brain Atlas-Driven Visualizations (ABADV) is a publicly accessible web-based tool created to retrieve and visualize expression energy data from the Allen Brain Atlas (ABA) across multiple genes and brain structures. Though the ABA offers their own search engine and software for researchers to view their growing collection of online public data sets, including extensive gene expression and neuroanatomical data from human and mouse brain, many of their tools limit the number of genes and brain structures researchers can view at once. To complement their work, ABADV generates multiple pie charts, bar charts and heat maps of expression energy values for any given set of genes and brain structures. Such a suite of free and easy-to-understand visualizations allows for easy comparison of gene expression across multiple brain areas. In addition, each visualization links back to the ABA so researchers may view a summary of the experimental detail. ABADV is currently supported on modern web browsers and is compatible with expression energy data from the Allen Mouse Brain Atlas in situ hybridization data. By creating this web application, researchers can immediately obtain and survey large amounts of expression energy data from the ABA, which they can then use to supplement their work or perform meta-analysis. In the future, we hope to enable ABADV across multiple data resources.

  4. DECENTRALIZED SOCIAL NETWORK SERVICE USING THE WEB HOSTING SERVER FOR PRIVACY PRESERVATION

    Directory of Open Access Journals (Sweden)

    Yoonho Nam

    2013-10-01

    Full Text Available In recent years, the number of subscribers of social network services such as Facebook and Twitter has increased rapidly. In accordance with the increasing popularity of social network services, concerns about user privacy are also growing. Existing social network services have a centralized structure in which a service provider collects all of the users' profiles and logs until the end of the connection. The information collected is typically used for commercial purposes, but may lead to serious violations of user privacy. The user's profile can be exploited for malicious purposes and may even become a tool of surveillance. In this paper, we remove the centralized structure to prevent the service provider from collecting all users' information indiscriminately, and present a decentralized structure using the web hosting server. The service provider provides only the service applications to web hosting companies, and the user should select a web hosting company that he trusts. Thus, the user's information is distributed, and the user's privacy is guaranteed from the service provider.

  5. RV-Typer: A Web Server for Typing of Rhinoviruses Using Alignment-Free Approach.

    Directory of Open Access Journals (Sweden)

    Pandurang S Kolekar

    Full Text Available Rhinoviruses (RV) are increasingly being reported to cause mild to severe infections of the respiratory tract in humans. RV are antigenically the most diverse species of the genus Enterovirus and family Picornaviridae. There are three species of RV (RV-A, -B and -C), with 80, 32 and 55 serotypes/types, respectively. Antigenic variation is the main limiting factor for development of a cross-protective vaccine against RV. Serotyping of rhinoviruses is carried out using cross-neutralization assays in cell culture. However, these assays become laborious and time-consuming for a large number of strains. Alternatively, serotyping of RV is carried out by alignment-based phylogeny of both protein and nucleotide sequences of VP1. However, serotyping of RV based on alignment-based phylogeny is a multi-step process, which needs to be repeated every time a new isolate is sequenced. In view of the growing need for serotyping of RV, an alignment-free method based on the "return time distribution" (RTD) of amino acid residues in the VP1 protein has been developed and implemented in the form of a web server titled RV-Typer. RV-Typer accepts nucleotide or protein sequences as an input and computes return times of di-peptides (k = 2) to assign serotypes. The RV-Typer performs with 100% sensitivity and specificity. It is significantly faster than alignment-based methods. The web server is available at http://bioinfo.net.in/RV-Typer/home.html.
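
    As a rough illustration of the alignment-free idea described above, the following Python sketch computes return times of di-peptides (k = 2) and summarizes them as a numeric feature vector. Summarizing each di-peptide's return times by mean and standard deviation, and the example VP1 fragment, are assumptions made for illustration rather than the exact statistics used by RV-Typer.

```python
import statistics
from collections import defaultdict

# Minimal sketch of a "return time distribution" (RTD) feature vector for
# di-peptides (k = 2), the alignment-free representation the abstract
# describes. The summary statistics chosen here are an assumption.

def return_times(seq, k=2):
    """Map each k-mer to the list of gaps between its successive occurrences."""
    last_seen, gaps = {}, defaultdict(list)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in last_seen:
            gaps[kmer].append(i - last_seen[kmer])
        last_seen[kmer] = i
    return gaps

def rtd_features(seq, k=2):
    """Mean/std of return times per k-mer, usable as a numeric profile."""
    feats = {}
    for kmer, g in return_times(seq, k).items():
        feats[kmer] = (statistics.mean(g), statistics.pstdev(g))
    return feats

# Example VP1 fragment (illustrative only)
vp1 = "MGAQVSRQNVGTHSTQNMVSNGSSLNYFNINYFKDAASSGASRLDFSQDPSKFTDPVKDVLEKGIPTLQ"
print(list(rtd_features(vp1).items())[:3])
```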

  6. RNA-TVcurve: a Web server for RNA secondary structure comparison based on a multi-scale similarity of its triple vector curve representation.

    Science.gov (United States)

    Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin

    2017-01-21

    RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step to understand and interpret their functional relationship. The majority of functional RNAs show conserved secondary structures, rather than sequence conservation. Algorithms relying on sequence-based features alone usually have limited prediction performance; hence, integrating RNA structure features is critical for RNA analysis. Existing algorithms mainly fall into two categories: alignment-based and alignment-free. The alignment-free algorithms of RNA comparison usually have lower time complexity than alignment-based algorithms. An alignment-free RNA comparison algorithm was proposed, in which a novel numerical representation, RNA-TVcurve (triple vector curve representation), captures both the RNA sequence and its secondary structure features. A multi-scale similarity score of two given RNAs was then designed based on the wavelet decomposition of their numerical representations. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of RNA secondary structure; 2) detection of single-point mutations based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The inputs of the web server require RNA primary sequences, while corresponding secondary structures are optional. For primary sequences alone, the web server can compute the secondary structures using a free energy minimization algorithm via the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server, based on an alignment-free method, to deliver a suite of RNA analysis functions, including visualization, mutation analysis and multiple RNA structure comparison. The comparison results with two popular RNA

  7. TANK INFORMATION SYSTEM (TIS: A CASE STUDY IN MIGRATING WEB MAPPING APPLICATION FROM FLEX TO DOJO FOR ARCGIS SERVER AND THEN TO OPEN SOURCE

    Directory of Open Access Journals (Sweden)

    B. R. Pulsani

    2017-11-01

    Full Text Available Tank Information System is a web application which provides comprehensive information about minor irrigation tanks of Telangana State. As part of the program, a web mapping application using Flex and ArcGIS Server was developed to make the data available to the public. In the course of time, as Flex became outdated, a migration of the client interface to the latest JavaScript-based technologies was carried out. Initially, the Flex-based application was migrated to the ArcGIS JavaScript API using the Dojo Toolkit. Both client applications used published services from ArcGIS Server. To check the migration pattern from proprietary to open source, the JavaScript-based ArcGIS application was later migrated to OpenLayers and the Dojo Toolkit, which used published services from GeoServer. The migration pattern noticed in the study especially emphasizes the use of the Dojo Toolkit and a PostgreSQL database for ArcGIS Server so that migration to open source can be performed effortlessly. The current application provides a case study which could assist organizations in migrating their proprietary ArcGIS-based web applications to open source. Furthermore, the study reveals the cost benefits of adopting open source against commercial software.

  8. A GRID-like computing proposal for the Tile calorimeter of the ATLAS experiment

    CERN Document Server

    Maidantchik, C; Lanza, M L D; Santelli, R; Damazio, D O

    2004-01-01

    For the hadronic calorimeter of the ATLAS detector, the TileTransfer has been developed as a Web system to facilitate the transfer of data produced during calibration testbeam periods. It automatically searches, stages and provides a link to download the selected data stored at a remote file center. The system has an interface with the Run Info Database, which contains the description of all test beam runs. In order to optimize the file transmission, the system is connected to a central repository that stores information about the latest accesses. Once a client host connects to the TileTransfer, it can become a file server to other users. At the servers, the selected file is split into several pieces and each piece is sent in parallel and built up together in the final destination. TileTransfer allows the file administration to be geographically distributed, avoiding overload at the central repository. We also foresee the integration with analysis tools by remote Web access and the publicatio...

  9. LigParGen web server: an automatic OPLS-AA parameter generator for organic ligands

    Science.gov (United States)

    Dodda, Leela S.

    2017-01-01

    Abstract The accurate calculation of protein/nucleic acid–ligand interactions or condensed phase properties by force field-based methods requires a precise description of the energetics of intermolecular interactions. Despite the progress made in force fields, small molecule parameterization remains an open problem due to the magnitude of the chemical space; the most critical issue is the estimation of a balanced set of atomic charges with the ability to reproduce experimental properties. The LigParGen web server provides an intuitive interface for generating OPLS-AA/1.14*CM1A(-LBCC) force field parameters for organic ligands, in the formats of commonly used molecular dynamics and Monte Carlo simulation packages. This server has high value for researchers interested in studying any phenomena based on intermolecular interactions with ligands via molecular mechanics simulations. It is free and open to all at jorgensenresearch.com/ligpargen, and has no login requirements. PMID:28444340

  10. Nucleos: a web server for the identification of nucleotide-binding sites in protein structures.

    Science.gov (United States)

    Parca, Luca; Ferré, Fabrizio; Ausiello, Gabriele; Helmer-Citterich, Manuela

    2013-07-01

    Nucleos is a web server for the identification of nucleotide-binding sites in protein structures. Nucleos compares the structure of a query protein against a set of known template 3D binding sites representing nucleotide modules, namely the nucleobase, carbohydrate and phosphate. Structural features, clustering and conservation are used to filter and score the predictions. The predicted nucleotide modules are then joined to build whole nucleotide-binding sites, which are ranked by their score. The server takes as input either the PDB code of the query protein structure or a user-submitted structure in PDB format. The output of Nucleos is composed of ranked lists of predicted nucleotide-binding sites divided by nucleotide type (e.g. ATP-like). For each ranked prediction, Nucleos provides detailed information about the score, the template structure and the structural match for each nucleotide module composing the nucleotide-binding site. The predictions on the query structure and the template-binding sites can be viewed directly on the web through a graphical applet. In 98% of the cases, the modules composing correct predictions belong to proteins with no homology relationship between each other, meaning that the identification of brand-new nucleotide-binding sites is possible using information from non-homologous proteins. Nucleos is available at http://nucleos.bio.uniroma2.it/nucleos/.

  11. Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser

    Science.gov (United States)

    Technology & Learning, 2005

    2005-01-01

    In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…

  12. dAPE: a web server to detect homorepeats and follow their evolution.

    Science.gov (United States)

    Mier, Pablo; Andrade-Navarro, Miguel A

    2017-04-15

    Homorepeats are low complexity regions consisting of repetitions of a single amino acid residue. There is no current consensus on the minimum number of residues needed to define a functional homorepeat, nor even if mismatches are allowed. Here we present dAPE, a web server that helps following the evolution of homorepeats based on orthology information, using a sensitive but tunable cutoff to help in the identification of emerging homorepeats. dAPE can be accessed from http://cbdm-01.zdv.uni-mainz.de/∼munoz/polyx . munoz@uni-mainz.de. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
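
    A homorepeat, as described above, is simply a run of one amino acid; the following minimal Python sketch reports such runs above a tunable minimum length. The perfect-run assumption (no mismatches) and the cutoff value are illustrative choices, not dAPE's actual defaults.

```python
import re

# Minimal sketch of homorepeat detection in the spirit of dAPE: report runs of
# a single amino acid of at least `min_len` residues. The handling of
# mismatches in dAPE is configurable server-side; this perfect-run version is
# an assumption for illustration.

def find_homorepeats(seq, min_len=6):
    """Yield (residue, start, end) for every run of one residue >= min_len."""
    pattern = rf"(.)\1{{{min_len - 1},}}"      # e.g. "(.)\1{5,}" for min_len=6
    for m in re.finditer(pattern, seq):
        yield m.group(1), m.start(), m.end()

protein = "MSEQQQQQQQQLVKAAAAAPGDNNNNNNSTR"   # toy sequence
for residue, start, end in find_homorepeats(protein, min_len=5):
    print(f"poly-{residue} at {start}-{end} (length {end - start})")
```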

  13. Migration of the ATLAS Metadata Interface (AMI) to Web 2.0 and cloud

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI) can be considered to be a mature application because it has existed for at least 10 years. Over the last year, we have been adapting the application to some recently available technologies. The web interface, which previously manipulated XML documents using XSL transformations, has been migrated to Asynchronous JavaScript (AJAX). Web development has been considerably simplified by the development of a framework for AMI based on JQuery and Twitter Bootstrap. Finally there has been a major upgrade of the Python web service client.

  14. BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.

    Science.gov (United States)

    Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron

    2009-06-01

    BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing by Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible, by server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a creative commons license along with additional documentation and a tutorial from (http://bioinf.nuigalway.ie).

  15. iGPCR-drug: a web server for predicting interaction between GPCRs and drugs in cellular networking.

    Directory of Open Access Journals (Sweden)

    Xuan Xiao

    Full Text Available Involved in many diseases such as cancer, diabetes, neurodegenerative, inflammatory and respiratory disorders, G-protein-coupled receptors (GPCRs) are among the most frequent targets of therapeutic drugs. It is time-consuming and expensive to determine whether a drug and a GPCR are to interact with each other in a cellular network purely by means of experimental techniques. Although some computational methods were developed in this regard based on knowledge of the 3D (three-dimensional) structure of the protein, unfortunately their usage is quite limited because the 3D structures for most GPCRs are still unknown. To overcome the situation, a sequence-based classifier, called "iGPCR-drug", was developed to predict the interactions between GPCRs and drugs in cellular networking. In the predictor, the drug compound is formulated by a 2D (two-dimensional) fingerprint via a 256D vector, the GPCR by the PseAAC (pseudo amino acid composition) generated with the grey model theory, and the prediction engine is operated by the fuzzy K-nearest neighbour algorithm. Moreover, a user-friendly web-server for iGPCR-drug was established at http://www.jci-bioinfo.cn/iGPCR-Drug/. For the convenience of most experimental scientists, a step-by-step guide is provided on how to use the web-server to get the desired results without the need to follow the complicated math equations presented in this paper just for its integrity. The overall success rate achieved by iGPCR-drug via the jackknife test was 85.5%, which is remarkably higher than the rate by the existing peer method developed in 2010 although no web server was ever established for it. It is anticipated that iGPCR-Drug may become a useful high throughput tool for both basic research and drug development, and that the approach presented here can also be extended to study other drug - target interaction networks.

  16. Web-based control application using WebSocket

    International Nuclear Information System (INIS)

    Furukawa, Y.

    2012-01-01

    The WebSocket allows asynchronous full-duplex communication between a Web-based (i.e. JavaScript-based) application and a Web server. WebSocket started as a part of HTML5 standardization but has since been separated from HTML5 and developed independently. Using WebSocket, it becomes easy to develop platform-independent presentation-layer applications for accelerator and beamline control software. In addition, a Web browser is the only application program that needs to be installed on the client computer. WebSocket-based applications communicate with the WebSocket server using simple text-based messages, so WebSocket is applicable to message-based control systems like MADOCA, which was developed for the SPring-8 control system. A simple WebSocket server for the MADOCA control system and a simple motor control application were successfully made as a first trial of the WebSocket control application. Using Google Chrome (version 13.0) on Debian/Linux and Windows 7, Opera (version 11.0) on Debian/Linux and Safari (version 5.0.3) on Mac OS X as clients, the motors could be controlled using a WebSocket-based Web application. A diffractometer control application for use in synchrotron radiation diffraction experiments was also developed. (author)
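
    The text-based message pattern described above can be sketched in a few lines of Python. The example below uses the third-party websockets package (a recent version, 10 or later, is assumed) to run a server that accepts simple text commands and replies with text; the "move <motor> <steps>" vocabulary is an invented stand-in, not the actual MADOCA/SPring-8 message format.

```python
# Minimal sketch of a text-message WebSocket control server.
# Requires the third-party `websockets` package (pip install websockets).
import asyncio
import websockets

async def handle(ws):
    async for message in ws:           # e.g. "move theta 100"
        parts = message.split()
        if len(parts) == 3 and parts[0] == "move":
            _, motor, steps = parts
            # A real server would forward this command to the control system here.
            await ws.send(f"ok {motor} moved {steps} steps")
        else:
            await ws.send(f"error: unknown command '{message}'")

async def main():
    async with websockets.serve(handle, "localhost", 8765):
        await asyncio.Future()         # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```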

  17. AnnoLnc: a web server for systematically annotating novel human lncRNAs.

    Science.gov (United States)

    Hou, Mei; Tang, Xing; Tian, Feng; Shi, Fangyuan; Liu, Fenglin; Gao, Ge

    2016-11-16

    Long noncoding RNAs (lncRNAs) have been shown to play essential roles in almost every important biological process through multiple mechanisms. Although the repertoire of human lncRNAs has rapidly expanded, their biological function and regulation remain largely elusive, calling for a systematic and integrative annotation tool. Here we present AnnoLnc ( http://annolnc.cbi.pku.edu.cn ), a one-stop portal for systematically annotating novel human lncRNAs. Based on more than 700 data sources and various tool chains, AnnoLnc enables a systematic annotation covering genomic location, secondary structure, expression patterns, transcriptional regulation, miRNA interaction, protein interaction, genetic association and evolution. An intuitive web interface is available for interactive analysis through both desktops and mobile devices, and programmers can further integrate AnnoLnc into their pipeline through standard JSON-based Web Service APIs. To the best of our knowledge, AnnoLnc is the only web server to provide on-the-fly and systematic annotation for newly identified human lncRNAs. Compared with similar tools, the annotation generated by AnnoLnc covers a much wider spectrum with intuitive visualization. Case studies demonstrate the power of AnnoLnc in not only rediscovering known functions of human lncRNAs but also inspiring novel hypotheses.

  18. Abdominal aortic aneurysms: virtual imaging and analysis through a remote web server

    International Nuclear Information System (INIS)

    Neri, Emanuele; Bargellini, Irene; Vignali, Claudio; Bartolozzi, Carlo; Rieger, Michael; Jaschke, Werner; Giachetti, Andrea; Tuveri, Massimiliano

    2005-01-01

    The study describes the application of web-based software in the planning of the endovascular treatment of abdominal aortic aneurysms (AAA). The software has been developed in the framework of a 2-year research project called Aneurysm QUAntification Through an Internet Collaborative System (AQUATICS); it allows remote management of Virtual Reality Modeling Language (VRML) models of the abdominal aorta, derived from multirow computed tomography angiography (CTA) data sets, and measurement of diameters, angles and centerline lengths. To test the reliability of measurements, two radiologists performed a detailed analysis of multiple 3D models generated from a synthetic phantom mimicking an AAA. The system was tested on 30 patients with AAA; CTA data sets were mailed and the time required for segmentation and measurement was collected for each case. The Bland-Altman plot analysis showed that the mean intra- and inter-observer differences in measurements on phantoms were clinically acceptable. The mean time required for segmentation was 1 h (range 45-120 min). The mean time required for measurements on the web was 7 min (range 4-11 min). The AQUATICS web server may provide a rapid, standardized and accurate tool for the evaluation of AAA prior to endovascular treatment. (orig.)

  19. RNAPattMatch: a web server for RNA sequence/structure motif detection based on pattern matching with flexible gaps

    Science.gov (United States)

    Drory Retwitzer, Matan; Polishchuk, Maya; Churkin, Elena; Kifer, Ilona; Yakhini, Zohar; Barash, Danny

    2015-01-01

    Searching for RNA sequence-structure patterns is becoming an essential tool for RNA practitioners. Novel discoveries of regulatory non-coding RNAs in targeted organisms and the motivation to find them across a wide range of organisms have prompted the use of computational RNA pattern matching as an enhancement to sequence similarity. State-of-the-art programs differ by the flexibility of patterns allowed as queries and by their simplicity of use. In particular, no existing method is available as a user-friendly web server. A general program that searches for RNA sequence-structure patterns is RNA Structator. However, it is not available as a web server and does not provide the option of a flexible gap pattern representation in which an upper bound on the gap length can be specified at any position in the sequence. Here, we introduce RNAPattMatch, a web-based application that is user friendly and makes sequence/structure RNA queries accessible to practitioners of various backgrounds and proficiency. It also extends RNA Structator, allowing a more flexible variable gap representation, in addition to analysis of results using energy minimization methods. The RNAPattMatch service is available at http://www.cs.bgu.ac.il/rnapattmatch. A standalone version of the search tool is also available to download at the site. PMID:25940619
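
    The "flexible gap with an upper bound" idea mentioned above can be illustrated by compiling a sequence pattern with bounded gaps into a regular expression, as in the hedged Python sketch below. The pattern syntax is invented for illustration and ignores the secondary-structure constraints that RNAPattMatch also supports.

```python
import re

# Sketch of a sequence pattern with bounded flexible gaps: a query such as
# "GGAC, then a gap of at most five unspecified bases, then UACG" is compiled
# into a regular expression. Invented syntax, for illustration only.

IUPAC = {"A": "A", "C": "C", "G": "G", "U": "U", "N": "[ACGU]",
         "R": "[AG]", "Y": "[CU]"}

def compile_pattern(elements):
    """elements: list of ('seq', 'GGAC') or ('gap', min, max) tuples."""
    parts = []
    for el in elements:
        if el[0] == "seq":
            parts.append("".join(IUPAC[ch] for ch in el[1]))
        else:
            _, lo, hi = el
            parts.append(f"[ACGU]{{{lo},{hi}}}")
    return re.compile("".join(parts))

pattern = compile_pattern([("seq", "GGAC"), ("gap", 0, 5), ("seq", "UACG")])
target = "AAGGACGGUUACGCC"
match = pattern.search(target)
print(match.span() if match else "no match")
```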

  20. The ChIP-Seq tools and web server: a resource for analyzing ChIP-seq and other types of genomic data.

    Science.gov (United States)

    Ambrosini, Giovanna; Dreos, René; Kumar, Sunil; Bucher, Philipp

    2016-11-18

    ChIP-seq and related high-throughput chromatin profiling assays generate ever increasing volumes of highly valuable biological data. To make sense of it, biologists need versatile, efficient and user-friendly tools for access, visualization and integrative analysis of such data. Here we present the ChIP-Seq command line tools and web server, implementing basic algorithms for ChIP-seq data analysis starting with a read alignment file. The tools are optimized for memory-efficiency and speed, thus allowing the processing of large data volumes on inexpensive hardware. The web interface provides access to a large database of public data. The ChIP-Seq tools have a modular and interoperable design in that the output from one application can serve as input to another one. Complex and innovative tasks can thus be achieved by running several tools in a cascade. The various ChIP-Seq command line tools and web services either complement or compare favorably to related bioinformatics resources in terms of computational efficiency, ease of access to public data and interoperability with other web-based tools. The ChIP-Seq server is accessible at http://ccg.vital-it.ch/chipseq/ .

  1. The design and implementation of web mining in web sites security

    Science.gov (United States)

    Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li

    2003-06-01

    Backdoors or information leaks in Web servers can be detected by applying Web Mining techniques to abnormal Web log and Web application log data, enhancing the security of Web servers and avoiding the damage caused by illegal access. Firstly, a system for discovering patterns of information leakage in CGI scripts from Web log data was proposed. Secondly, those patterns were provided to system administrators so that they can modify their code and enhance their Web site security. Two aspects were described: one is to combine the web application log with the web log to extract more information, so that web data mining can discover information that a firewall and an Intrusion Detection System cannot find. The other is to propose an operation module for the web site to enhance its security. In the server session clustering step, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
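
    In the spirit of the log-combination approach above, the following self-contained Python sketch parses Apache "combined" access-log lines, keeps only CGI requests and counts client/script/status combinations so that suspicious CGI accesses stand out. The simple frequency count is an invented stand-in for the paper's clustering and mining step.

```python
import re
from collections import Counter

# Toy sketch: parse access-log lines, keep CGI requests, and tally
# client/script/status combinations for review. The anomaly rule here
# (a plain frequency count) is only an illustrative placeholder.

LOG_LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) '
                      r'(?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+')

sample_log = [
    '10.0.0.5 - - [01/Jun/2003:10:00:01 +0000] "GET /cgi-bin/search.cgi?q=a HTTP/1.0" 200 512',
    '10.0.0.5 - - [01/Jun/2003:10:00:02 +0000] "GET /cgi-bin/search.cgi?q=../../etc/passwd HTTP/1.0" 404 120',
    '10.0.0.9 - - [01/Jun/2003:10:00:03 +0000] "GET /index.html HTTP/1.0" 200 2048',
]

hits = Counter()
for line in sample_log:
    m = LOG_LINE.match(line)
    if m and "/cgi-bin/" in m.group("path"):
        script = m.group("path").split("?")[0]
        hits[(m.group("ip"), script, m.group("status"))] += 1

for (ip, script, status), n in hits.most_common():
    print(f"{ip} -> {script} [{status}] x{n}")
```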

  2. EarthServer2 : The Marine Data Service - Web based and Programmatic Access to Ocean Colour Open Data

    Science.gov (United States)

    Clements, Oliver; Walker, Peter

    2017-04-01

    The ESA Ocean Colour - Climate Change Initiative (ESA OC-CCI) has produced a long-term high quality global dataset with associated per-pixel uncertainty data. This dataset has now grown to several hundred terabytes (uncompressed) and is freely available to download. However, the sheer size of the dataset can act as a barrier to many users; large network bandwidth, local storage and processing requirements can prevent researchers without the backing of a large organisation from taking advantage of this raw data. The EC H2020 project, EarthServer2, aims to create a federated data service providing access to more than 1 petabyte of earth science data. Within this federation the Marine Data Service already provides an innovative on-line tool-kit for filtering, analysing and visualising OC-CCI data. Data are made available, filtered and processed at source through a standards-based interface, the Open Geospatial Consortium Web Coverage Service and Web Coverage Processing Service. This work was initiated in the EC FP7 EarthServer project, where it was found that the unfamiliarity and complexity of these interfaces themselves created a barrier to wider uptake. The continuation project, EarthServer2, addresses these issues by providing higher level tools for working with these data. We will present some examples of these tools. Many researchers wish to extract time series data from discrete points of interest. We will present a web based interface, based on NASA/ESA WebWorldWind, for selecting points of interest and plotting time series from a chosen dataset. In addition, a CSV file of locations and times, such as a ship's track, can be uploaded and these points extracted and returned in a CSV file, allowing researchers to work with the extract locally, for example in a spreadsheet. We will also present a set of Python and JavaScript APIs that have been created to complement and extend the web based GUI. These APIs allow the selection of single points and areas for extraction. The
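
    A point time-series extraction through the Web Coverage Processing Service mentioned above might look like the hedged Python sketch below. The endpoint URL, coverage name and axis labels are placeholders and assumptions; the real values must be taken from the service's GetCapabilities document.

```python
import requests

# Hedged sketch of extracting a point time series with a WCPS query, the kind
# of standards-based access the abstract describes. Endpoint, coverage name
# and axis labels below are placeholders, not the real service values.

ENDPOINT = "https://example.org/rasdaman/ows"          # placeholder endpoint
QUERY = (
    'for c in (CHL_OC_CCI) return encode('
    'c[Lat(50.25), Long(-4.13), ansi("2010-01-01":"2010-12-31")], "csv")'
)

response = requests.get(
    ENDPOINT,
    params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": QUERY,
    },
    timeout=60,
)
response.raise_for_status()
print(response.text[:200])   # comma-separated values for the requested year
```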

  3. Migration of the ATLAS Metadata Interface (AMI) to Web 2.0 and cloud

    Science.gov (United States)

    Odier, J.; Albrand, S.; Fulachier, J.; Lambert, F.

    2015-12-01

    The ATLAS Metadata Interface (AMI), a mature application of more than 10 years of existence, is currently under adaptation to some recently available technologies. The web interfaces, which previously manipulated XML documents using XSL transformations, are being migrated to Asynchronous JavaScript (AJAX). Web development is considerably simplified by the introduction of a framework based on JQuery and Twitter Bootstrap. Finally, the AMI services are being migrated to an OpenStack cloud infrastructure.

  4. Recent ATLAS Articles on WLAP

    CERN Multimedia

    J. Herr

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: Atlas Physics Workshop 6-11 June 2005 June 2005 ATLAS Week Plenary Session Click here to browse WLAP for all ATLAS lectures.

  5. GenomeRunner web server: regulatory similarity and differences define the functional impact of SNP sets.

    Science.gov (United States)

    Dozmorov, Mikhail G; Cara, Lukas R; Giles, Cory B; Wren, Jonathan D

    2016-08-01

    The growing amount of regulatory data from the ENCODE, Roadmap Epigenomics and other consortia provides a wealth of opportunities to investigate the functional impact of single nucleotide polymorphisms (SNPs). Yet, given the large number of regulatory datasets, researchers are posed with a challenge of how to efficiently utilize them to interpret the functional impact of SNP sets. We developed the GenomeRunner web server to automate systematic statistical analysis of SNP sets within a regulatory context. Besides defining the functional impact of SNP sets, GenomeRunner implements novel regulatory similarity/differential analyses, and cell type-specific regulatory enrichment analysis. Validated against literature- and disease ontology-based approaches, analysis of 39 disease/trait-associated SNP sets demonstrated that the functional impact of SNP sets corresponds to known disease relationships. We identified a group of autoimmune diseases with SNPs distinctly enriched in the enhancers of T helper cell subpopulations, and demonstrated relevant cell type-specificity of the functional impact of other SNP sets. In summary, we show how systematic analysis of genomic data within a regulatory context can help interpreting the functional impact of SNP sets. GenomeRunner web server is freely available at http://www.integrativegenomics.org/ mikhail.dozmorov@gmail.com Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
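
    The core enrichment statistic behind this kind of regulatory analysis can be sketched with a hypergeometric test, as below. The counts are invented, and the sketch omits the multiple-testing correction and the similarity/differential analyses that GenomeRunner itself performs.

```python
from scipy.stats import hypergeom

# Toy sketch: given how many query SNPs overlap a regulatory annotation,
# compare against the background overlap rate with a hypergeometric test.
# All counts below are invented for illustration.

background_snps = 100_000      # all background SNPs considered
background_hits = 8_000        # background SNPs overlapping the annotation
query_snps = 150               # SNPs in the query set
query_hits = 30                # query SNPs overlapping the annotation

# P(X >= query_hits) when drawing query_snps SNPs from the background
p_enrichment = hypergeom.sf(query_hits - 1, background_snps,
                            background_hits, query_snps)
print(f"enrichment p-value = {p_enrichment:.3g}")
```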

  6. GalaxyHomomer: a web server for protein homo-oligomer structure prediction from a monomer sequence or structure.

    Science.gov (United States)

    Baek, Minkyung; Park, Taeyong; Heo, Lim; Park, Chiwook; Seok, Chaok

    2017-07-03

    Homo-oligomerization of proteins is abundant in nature, and is often intimately related with the physiological functions of proteins, such as in metabolism, signal transduction or immunity. Information on the homo-oligomer structure is therefore important to obtain a molecular-level understanding of protein functions and their regulation. Currently available web servers predict protein homo-oligomer structures either by template-based modeling using homo-oligomer templates selected from the protein structure database or by ab initio docking of monomer structures resolved by experiment or predicted by computation. The GalaxyHomomer server, freely accessible at http://galaxy.seoklab.org/homomer, carries out template-based modeling, ab initio docking or both depending on the availability of proper oligomer templates. It also incorporates recently developed model refinement methods that can consistently improve model quality. Moreover, the server provides additional options that can be chosen by the user depending on the availability of information on the monomer structure, oligomeric state and locations of unreliable/flexible loops or termini. The performance of the server was better than or comparable to that of other available methods when tested on benchmark sets and in a recent CASP performed in a blind fashion. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Interactive atlas using web browser: CT and MRI of the temporal bone

    International Nuclear Information System (INIS)

    Chung, Eun Chul; Youn, Eun Kyung; Lee, Young Uk

    2000-01-01

    The purposes of this study were to construct an interactive atlas of the temporal bone using a web browser and to provide a template for web-based teaching files, using free and shared applets and scripts on the internet. HRCT and MR images of the temporal bone including its normal anatomy, tumors, trauma, inflammation, anomalies and vascular diseases were used in this study. Acquired radiologic images were converted to GIF/JPG formats and retouched to achieve appropriate image quality. Text and image files of normal anatomy and diseases were written in HTML. JavaScript and applets were inserted in the HTML files for the interactive display of images and texts. In order to review anatomic features and diseases, a search index was also attached to the last part of the file. Using interactive images and text, temporal bone anatomy and disorders were displayed. Scripts and applets were also useful for indicating specific points of interest when the mouse was placed over anatomic sites. The atlas may be viewed in the form of a CD-ROM, or via the internet using any computer platform or web browser. This web-based teaching file of the temporal bone offers dynamic and interactive education. It can be usefully employed as a template for the production of interactive educational materials, offering JavaScript examples and suitable input for classes, and it can replace texts and imaging content. (author)

  8. Feasibility Analysis of a Lamp Monitoring and Control System Using a Raspberry Pi-Based Web Server

    Directory of Open Access Journals (Sweden)

    Maslan - Maslan

    2017-09-01

    Full Text Available As technology develops over time, system developers continue to strive to create more efficient monitoring systems. A common problem is that room monitoring and lamp control currently do not use an integrated web-server-based system. Room control systems still rely on large equipment such as televisions and large computers, so problems occurring in the room are difficult to control. The purpose of this study is therefore to create an efficient control system based on the compact Raspberry Pi. To build a system based on the Raspberry Pi microcontroller, an initial observation was required to perform a feasibility test on the system to be developed. The feasibility test covered the success of room monitoring using CCTV (Closed Circuit Television) equipment and automatic lamp control, using a prototyping method. The CCTV feasibility test is assessed by resolution, while lamp control is assessed by the success of switching the lights during the test. Based on the testing, it is concluded that room monitoring using a Raspberry Pi-based web server is feasible: the lamp control tests ran smoothly, because the lamps controlled through the web server were successfully switched on and off, and live CCTV monitoring of the room also ran well. During the room monitoring experiments with a webcam at 800 x 600 resolution, the frame rate varied considerably, rising and falling due to an unstable network connection.
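
    A minimal sketch of web-server-based lamp control on a Raspberry Pi, in the spirit of the system above, is shown below using Flask and RPi.GPIO. The GPIO pin, relay wiring and URL routes are assumptions for illustration; the CCTV monitoring part of the study is not reproduced here.

```python
from flask import Flask
import RPi.GPIO as GPIO   # available on Raspberry Pi OS images

# Hedged sketch of lamp control over a small web server on a Raspberry Pi.
# Pin number, wiring and routes are illustrative assumptions.

LAMP_PIN = 17                      # BCM pin driving the lamp relay (assumed)
app = Flask(__name__)

GPIO.setmode(GPIO.BCM)
GPIO.setup(LAMP_PIN, GPIO.OUT, initial=GPIO.LOW)

@app.route("/lamp/<state>")
def lamp(state):
    """Turn the lamp on or off: GET /lamp/on or /lamp/off."""
    if state not in ("on", "off"):
        return "unknown state", 400
    GPIO.output(LAMP_PIN, GPIO.HIGH if state == "on" else GPIO.LOW)
    return f"lamp is now {state}"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```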

  9. The research and implementation of coalfield spontaneous combustion of carbon emission WebGIS based on Silverlight and ArcGIS server

    International Nuclear Information System (INIS)

    Zhu, Z; Bi, J; Wang, X; Zhu, W

    2014-01-01

    As an important sub-topic in the construction of a public information platform for carbon emission data from natural processes, a WebGIS system for carbon emissions from coalfield spontaneous combustion has become an important object of study. Given the characteristics of coalfield spontaneous combustion carbon emission data (a wide range of rich and complex data) and its geospatial nature, the data are divided into attribute data and spatial data. Based on a full analysis of the data, a detailed design of the Oracle database was completed and the data were stored in Oracle. Through Silverlight rich-client technology and the extension of WCF services, dynamic web query, retrieval, statistics and analysis of the attribute data were achieved. For spatial data, the ArcGIS Server and Silverlight-based API are used to invoke map services, GP services, image services and other services published in the background by the GIS server, implementing the display, analysis and thematic mapping of coalfield spontaneous combustion remote sensing imagery and web map data. The study found that Silverlight rich-client technology, combined with an object-oriented framework of WCF services, can be used to construct a WebGIS system efficiently. Combined with the ArcGIS Silverlight API for interactive querying of the attribute and spatial data of coalfield spontaneous combustion emissions, it can greatly improve the performance of the WebGIS system. At the same time, it provided a strong guarantee for the construction of public information on China's carbon emission data.

  10. COGNAC: a web server for searching and annotating hydrogen-bonded base interactions in RNA three-dimensional structures.

    Science.gov (United States)

    Firdaus-Raih, Mohd; Hamdani, Hazrina Yusof; Nadzirin, Nurul; Ramlan, Effirul Ikhwan; Willett, Peter; Artymiuk, Peter J

    2014-07-01

    Hydrogen bonds are crucial factors that stabilize a complex ribonucleic acid (RNA) molecule's three-dimensional (3D) structure. Minute conformational changes can result in variations in the hydrogen bond interactions in a particular structure. Furthermore, networks of hydrogen bonds, especially those found in tight clusters, may be important elements in structure stabilization or function and can therefore be regarded as potential tertiary motifs. In this paper, we describe a graph theoretical algorithm implemented as a web server that is able to search for unbroken networks of hydrogen-bonded base interactions and thus provide an accounting of such interactions in RNA 3D structures. This server, COGNAC (COnnection tables Graphs for Nucleic ACids), is also able to compare the hydrogen bond networks between two structures and from such annotations enable the mapping of atomic level differences that may have resulted from conformational changes due to mutations or binding events. The COGNAC server can be accessed at http://mfrlab.org/grafss/cognac. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
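
    The graph-theoretical idea above, treating hydrogen-bonded base pairs as edges and unbroken networks as connected components, can be sketched in a few lines of Python with networkx. The edge list is a toy example rather than contacts parsed from a real PDB entry.

```python
import networkx as nx

# Sketch of the COGNAC-style idea: base-base hydrogen bonds become graph
# edges and unbroken networks are connected components. Toy edge list only.

hbonds = [            # (residue_i, residue_j) pairs joined by base H-bonds
    ("A:G10", "A:C25"),
    ("A:C25", "A:A47"),   # a base triple: C25 bonds to both G10 and A47
    ("A:U11", "A:A24"),
    ("B:G3", "B:C58"),
]

g = nx.Graph()
g.add_edges_from(hbonds)

for i, component in enumerate(nx.connected_components(g), start=1):
    if len(component) >= 3:          # clusters, i.e. candidate tertiary motifs
        print(f"network {i}: {sorted(component)}")
```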

  11. Web-based Quality Control Tool used to validate CERES products on a cluster of Linux servers

    Science.gov (United States)

    Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Mlynczak, P.; Mitrescu, C.; Doelling, D.

    2014-12-01

    There have been a few popular desktop tools used in the Earth Science community to validate science data. Because of the limitations on the capacity of desktop hardware, such as disk space and CPUs, those software packages are not able to display large amounts of data from files. This poster will describe an in-house developed web-based software system built on a cluster of Linux servers, which allows users to take advantage of several Linux servers working in parallel to generate hundreds of images in a short period of time. The poster will demonstrate: (1) the hardware and software architecture used to provide high image throughput; (2) the software structure that can incorporate new products and new requirements quickly; and (3) the user interface through which users can manipulate the data and control how the images are displayed.
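
    The parallel image-generation pattern described in the poster can be reduced to a single machine with a process pool, as in the hedged Python sketch below. The file names and synthetic data are placeholders; the real tool reads CERES products and distributes work across several Linux servers.

```python
import multiprocessing as mp
import numpy as np
import matplotlib
matplotlib.use("Agg")                 # headless rendering on a Linux server
import matplotlib.pyplot as plt

# Sketch of rendering many quality-control images in parallel with a process
# pool. Synthetic data and file names are illustrative placeholders.

def render(index):
    """Render one quality-control image and return its file name."""
    data = np.random.default_rng(index).normal(size=(180, 360))
    fig, ax = plt.subplots(figsize=(4, 2))
    ax.imshow(data, cmap="viridis")
    ax.set_title(f"granule {index}")
    name = f"qc_{index:04d}.png"
    fig.savefig(name, dpi=100)
    plt.close(fig)
    return name

if __name__ == "__main__":
    with mp.Pool(processes=4) as pool:
        files = pool.map(render, range(100))
    print(f"wrote {len(files)} images")
```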

  12. CheD: chemical database compilation tool, Internet server, and client for SQL servers.

    Science.gov (United States)

    Trepalin, S V; Yarkov, A V

    2001-01-01

    An efficient program, which runs on a personal computer, for the storage, retrieval, and processing of chemical information, is presented. The program can work both as a stand-alone application or in conjunction with a specifically written Web server application or with some standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.

  13. Implementation of a Server Cluster on Raspberry Pi Using the Load Balancing Method

    Directory of Open Access Journals (Sweden)

    Ridho Habi Putra

    2016-06-01

    Full Text Available A server is an important part of a service in a computer network. The role of the server can determine whether the quality of that service is good or bad. Failure of a server can be caused by several factors, including hardware damage, the network system, and the power supply. One solution to overcome server failure in a computer network is server clustering. The aim of this study is to measure the capability of the Raspberry Pi (Raspi) when used as a web server. The Raspberry Pi used is a Raspberry Pi 2 Model B with an ARM Cortex-A7 processor running at 900 MHz and 1 GB of RAM. The operating system used on the Raspberry Pi is Debian Wheezy Linux. The study uses four Raspberry Pi devices, of which two Raspis are used as web servers and the other two are used as the load balancer and the database server. The method used to build this server cluster is load balancing, in which the server load is distributed evenly across each node. Testing was carried out by comparing the performance of a Raspberry Pi handling data traffic alone, without a load balancer, against Raspberry Pis using a load balancer to spread the load among the members of the server cluster.
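
    The round-robin idea behind load balancing can be illustrated with the toy Python sketch below, which spreads requests evenly across two backend web servers. The backend addresses are assumptions; the study's actual balancer node would typically run dedicated software such as nginx or HAProxy.

```python
import itertools
import requests

# Toy sketch of round-robin load balancing: requests are spread evenly across
# two backend Raspberry Pi web servers. Addresses below are assumptions.

BACKENDS = itertools.cycle([
    "http://192.168.1.11",   # Raspi web server 1 (assumed address)
    "http://192.168.1.12",   # Raspi web server 2 (assumed address)
])

def forward(path="/"):
    """Send one request to the next backend in round-robin order."""
    backend = next(BACKENDS)
    response = requests.get(backend + path, timeout=5)
    return backend, response.status_code

if __name__ == "__main__":
    for _ in range(4):
        print(forward())
```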

  14. UFO: a web server for ultra-fast functional profiling of whole genome protein sequences.

    Science.gov (United States)

    Meinicke, Peter

    2009-09-02

    Functional profiling is a key technique to characterize and compare the functional potential of entire genomes. The estimation of profiles according to an assignment of sequences to functional categories is a computationally expensive task because it requires the comparison of all protein sequences from a genome with a usually large database of annotated sequences or sequence families. Based on machine learning techniques for Pfam domain detection, the UFO web server for ultra-fast functional profiling allows researchers to process large protein sequence collections instantaneously. Besides the frequencies of Pfam and GO categories, the user also obtains the sequence specific assignments to Pfam domain families. In addition, a comparison with existing genomes provides dissimilarity scores with respect to 821 reference proteomes. Considering the underlying UFO domain detection, the results on 206 test genomes indicate a high sensitivity of the approach. In comparison with current state-of-the-art HMMs, the runtime measurements show a considerable speed up in the range of four orders of magnitude. For an average size prokaryotic genome, the computation of a functional profile together with its comparison typically requires about 10 seconds of processing time. For the first time the UFO web server makes it possible to get a quick overview on the functional inventory of newly sequenced organisms. The genome scale comparison with a large number of precomputed profiles allows a first guess about functionally related organisms. The service is freely available and does not require user registration or specification of a valid email address.
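
    Once domains have been assigned, the profiling step described above amounts to counting domain family frequencies and comparing the resulting vectors, as in the Python sketch below. The cosine dissimilarity and the mock Pfam accessions are illustrative assumptions, not UFO's actual scoring scheme.

```python
import math
from collections import Counter

# Sketch of a genome-level functional profile: relative frequencies of Pfam
# domain families, compared with an assumed cosine dissimilarity. The domain
# assignments themselves (UFO's fast detection step) are mocked here.

def profile(domain_hits):
    """domain_hits: list of Pfam accessions detected across a proteome."""
    counts = Counter(domain_hits)
    total = sum(counts.values())
    return {acc: n / total for acc, n in counts.items()}

def cosine_dissimilarity(p, q):
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in keys)
    norm = (math.sqrt(sum(v * v for v in p.values())) *
            math.sqrt(sum(v * v for v in q.values())))
    return 1.0 - dot / norm if norm else 1.0

genome_a = profile(["PF00005", "PF00005", "PF00072", "PF00486"])
genome_b = profile(["PF00005", "PF00072", "PF00072", "PF07690"])
print(f"dissimilarity = {cosine_dissimilarity(genome_a, genome_b):.3f}")
```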

  15. UFO: a web server for ultra-fast functional profiling of whole genome protein sequences

    Directory of Open Access Journals (Sweden)

    Meinicke Peter

    2009-09-01

    Full Text Available Abstract Background Functional profiling is a key technique to characterize and compare the functional potential of entire genomes. The estimation of profiles according to an assignment of sequences to functional categories is a computationally expensive task because it requires the comparison of all protein sequences from a genome with a usually large database of annotated sequences or sequence families. Description Based on machine learning techniques for Pfam domain detection, the UFO web server for ultra-fast functional profiling allows researchers to process large protein sequence collections instantaneously. Besides the frequencies of Pfam and GO categories, the user also obtains the sequence specific assignments to Pfam domain families. In addition, a comparison with existing genomes provides dissimilarity scores with respect to 821 reference proteomes. Considering the underlying UFO domain detection, the results on 206 test genomes indicate a high sensitivity of the approach. In comparison with current state-of-the-art HMMs, the runtime measurements show a considerable speed up in the range of four orders of magnitude. For an average size prokaryotic genome, the computation of a functional profile together with its comparison typically requires about 10 seconds of processing time. Conclusion For the first time the UFO web server makes it possible to get a quick overview on the functional inventory of newly sequenced organisms. The genome scale comparison with a large number of precomputed profiles allows a first guess about functionally related organisms. The service is freely available and does not require user registration or specification of a valid email address.

  16. ATLAS Detector Control System Data Viewer

    CERN Document Server

    Tsarouchas, Charilaos; Roe, S; Bitenc, U; Fehling-Kaschek, ML; Winkelmann, S; D’Auria, S; Hoffmann, D; Pisano, O

    2011-01-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. DCS Data Viewer (DDV) is a web interface application that provides access to historical data of ATLAS Detector Control System [1] (DCS) parameters written to the database (DB). It has a modular and flexible design and is structured using a client-server architecture. The server can be operated standalone with a command-line interface to the data, while the client offers a user friendly, browser independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as "value over time" charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The metadata selection and data output features can be used separately by XML con...

  17. A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation

    Science.gov (United States)

    Watanabe, Toru; Koizumi, Hisao

    In this paper, we propose a general-purpose connection-type CTI (Computer Telephony Integration) server that provides various CTI services such as voice logging, where the CTI server communicates with an IP-PBX using SIP (Session Initiation Protocol) and accumulates voice packets of external-line telephone calls flowing between an extension IP telephone and a VoIP gateway connected to outside line networks. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function which can provide various CTI services, such as a Web telephone directory, via a Web browser to PCs, cellular telephones or smart-phones in mobile environments.

  18. BioAtlas: Interactive web service for microbial distribution analysis

    DEFF Research Database (Denmark)

    Lund, Jesper; List, Markus; Baumbach, Jan

    Massive amounts of 16S rRNA sequencing data have been stored in publicly accessible databases, such as GOLD, SILVA, GreenGenes (GG), and the Ribosomal Database Project (RDP). Many of these sequences are tagged with geo-locations. Nevertheless, researchers currently lack a user-friendly tool to analyze microbial distribution in a location-specific context. BioAtlas is an interactive web application that closes this gap between sequence databases, taxonomy profiling and geo/body-location information. It enables users to browse taxonomically annotated sequences across (i) the world map, (ii) human...

  19. Distill: a suite of web servers for the prediction of one-, two- and three-dimensional structural features of proteins

    Directory of Open Access Journals (Sweden)

    Walsh Ian

    2006-09-01

    Full Text Available Abstract Background We describe Distill, a suite of servers for the prediction of protein structural features: secondary structure; relative solvent accessibility; contact density; backbone structural motifs; residue contact maps at 6, 8 and 12 Angstrom; coarse protein topology. The servers are based on large-scale ensembles of recursive neural networks and trained on large, up-to-date, non-redundant subsets of the Protein Data Bank. Together with structural feature predictions, Distill includes a server for prediction of Cα traces for short proteins (up to 200 amino acids). Results The servers are state-of-the-art, with secondary structure predicted correctly for nearly 80% of residues (currently the top performance on EVA), 2-class solvent accessibility nearly 80% correct, and contact maps exceeding 50% precision on the top non-diagonal contacts. A preliminary implementation of the predictor of protein Cα traces featured among the top 20 Novel Fold predictors at the last CASP6 experiment as group Distill (ID 0348). The majority of the servers, including the Cα trace predictor, now take into account homology information from the PDB, when available, resulting in greatly improved reliability. Conclusion All predictions are freely available through a simple joint web interface and the results are returned by email. In a single submission the user can send protein sequences for a total of up to 32k residues to all or a selection of the servers. Distill is accessible at the address: http://distill.ucd.ie/distill/.

  20. EnviroAtlas - Ecosystem Services Market-Based Programs Web Service, U.S., 2016, Forest Trends' Ecosystem Marketplace

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service contains layers depicting market-based programs and projects addressing ecosystem services protection in the United States. Layers...

  1. Web-based access to near real-time and archived high-density time-series data: cyber infrastructure challenges & developments in the open-source Waveform Server

    Science.gov (United States)

    Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.

    2010-12-01

    The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and client-side interface have been extensively rewritten. The Python Twisted server-side code-base has been fundamentally modified to now present waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single database model. This allows interactive web-based access to high-density (broadband @ 40Hz to strong motion @ 200Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries to now incorporate a variety of User Interface (UI) improvements including standardized calendars for defining time ranges, applying on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyber infrastructure challenges we have faced while developing this application, the use-cases currently in existence, and the limitations of web-based application development.
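
    Since the abstract mentions that the client retrieves data through simple JSON-based AJAX queries, a rough sketch of such a request in Python follows; the endpoint URL, parameter names and response fields are illustrative assumptions, not the actual Waveform Server API.

        import json
        from urllib.parse import urlencode
        from urllib.request import urlopen

        # Hypothetical query parameters for one station/channel/time window
        params = urlencode({
            "sta": "PFO", "chan": "BHZ",
            "start": "2010-01-01T00:00:00", "end": "2010-01-01T00:10:00",
            "calibrate": "true",   # ask for SI-unit (calibrated) data
        })
        with urlopen("http://waveform.example.org/query?" + params) as resp:
            record = json.load(resp)

        samples = record["data"]   # assumed field holding the time-series samples
        print(len(samples), "samples at", record.get("samprate"), "Hz")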

  2. The Geogenomic Mutational Atlas of Pathogens (GoMAP) web system.

    Directory of Open Access Journals (Sweden)

    David P Sargeant

    Full Text Available We present a new approach for pathogen surveillance that we call Geogenomics. Geogenomics examines the geographic distribution of the genomes of pathogens, with a particular emphasis on those mutations that give rise to drug resistance. We engineered a new web system called the Geogenomic Mutational Atlas of Pathogens (GoMAP) that enables investigation of the global distribution of individual drug resistance mutations. As a test case we examined mutations associated with HIV resistance to FDA-approved antiretroviral drugs. GoMAP-HIV makes use of existing public drug resistance and HIV protein sequence data to examine the distribution of 872 drug resistance mutations in ∼502,000 sequences for many countries in the world. We also implemented a broadened classification scheme for HIV drug resistance mutations. Several patterns for geographic distributions of resistance mutations were identified by visual mining using this web tool. GoMAP-HIV is an open access web application available at http://www.bio-toolkit.com/GoMap/project/

  3. PROFEAT Update: A Protein Features Web Server with Added Facility to Compute Network Descriptors for Studying Omics-Derived Networks.

    Science.gov (United States)

    Zhang, P; Tao, L; Zeng, X; Qin, C; Chen, S Y; Zhu, F; Yang, S Y; Li, Z R; Chen, W P; Chen, Y Z

    2017-02-03

    The studies of biological, disease, and pharmacological networks are facilitated by systems-level investigations using computational tools. In particular, the network descriptors developed in other disciplines have found increasing applications in the study of protein, gene regulatory, metabolic, disease, and drug-targeted networks. Public web servers provide facilities for computing network descriptors, but many descriptors are not covered, including those used or useful for biological studies. We upgraded the PROFEAT web server http://bidd2.nus.edu.sg/cgi-bin/profeat2016/main.cgi for computing up to 329 network descriptors and protein-protein interaction descriptors. PROFEAT network descriptors comprehensively describe the topological and connectivity characteristics of unweighted (uniform binding constants and molecular levels), edge-weighted (varying binding constants), node-weighted (varying molecular levels), edge-node-weighted (varying binding constants and molecular levels), and directed (oriented processes) networks. The usefulness of the network descriptors is illustrated by the literature-reported studies of the biological networks derived from the genome, interactome, transcriptome, metabolome, and diseasome profiles. Copyright © 2016 Elsevier Ltd. All rights reserved.
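
    To make the notion of a network descriptor concrete, the following sketch computes a few common topological descriptors for a toy protein-protein interaction network with networkx; the library choice, the example interactions and the small descriptor set are assumptions and only hint at the much larger set PROFEAT provides.

        import networkx as nx

        # Toy protein-protein interaction network; edge weights stand in for binding constants
        g = nx.Graph()
        g.add_weighted_edges_from([
            ("P53", "MDM2", 0.8), ("P53", "EP300", 0.5),
            ("MDM2", "MDMX", 0.9), ("EP300", "CREB1", 0.4),
        ])

        descriptors = {
            "nodes": g.number_of_nodes(),
            "edges": g.number_of_edges(),
            "density": nx.density(g),
            "average_clustering": nx.average_clustering(g),
            "average_shortest_path": nx.average_shortest_path_length(g),
        }
        print(descriptors)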

  4. A Python extension to the ATLAS online software for the thin gap chamber trigger system

    CERN Document Server

    Maeno, Tadashi; Komatsu, Satoru; Nakayoshi, Kazuo; Yasu, Yoshiji

    2004-01-01

    A Python extension module for A Toroidal LHC Apparatus (ATLAS) Online Software has been developed for the Thin Gap Chamber (TGC) trigger system. Python is an interactive scripting language including built-in high-level libraries, and provides an easy way to build Web applications. These features are not included in the Online Software, and are important in developing test software for the TGC trigger system. The Python extension module is designed and implemented using a C++ library, "Boost.Python." We have developed a Web application using the extension module and Zope (a Python-based Web application server), which allows one to monitor the TGC trigger system from anywhere in the world. The functionalities of the Python extension module and its application for the TGC trigger system are presented. 7 Refs.

  5. BAGEL4: a user-friendly web server to thoroughly mine RiPPs and bacteriocins.

    Science.gov (United States)

    van Heel, Auke J; de Jong, Anne; Song, Chunxu; Viel, Jakob H; Kok, Jan; Kuipers, Oscar P

    2018-05-21

    Interest in secondary metabolites such as RiPPs (ribosomally synthesized and posttranslationally modified peptides) is increasing worldwide. To facilitate the research in this field we have updated our mining web server. BAGEL4 is faster than its predecessor and is now fully independent from ORF-calling. Gene clusters of interest are discovered using the core-peptide database and/or through HMM motifs that are present in associated context genes. The databases used for mining have been updated and extended with literature references and links to UniProt and NCBI. Additionally, we have included automated promoter and terminator prediction and the option to upload RNA expression data, which can be displayed along with the identified clusters. Further improvements include the annotation of the context genes, which is now based on a fast blast against the prokaryote part of the UniRef90 database, and the improved web-BLAST feature that dynamically loads structural data such as internal cross-linking from UniProt. Overall BAGEL4 provides the user with more information through a user-friendly web-interface which simplifies data evaluation. BAGEL4 is freely accessible at http://bagel4.molgenrug.nl.

  6. CVTree3 Web Server for Whole-genome-based and Alignment-free Prokaryotic Phylogeny and Taxonomy

    Directory of Open Access Journals (Sweden)

    Guanghong Zuo

    2015-10-01

    Full Text Available A faithful phylogeny and an objective taxonomy for prokaryotes should agree with each other and ultimately follow the genome data. With the number of sequenced genomes reaching tens of thousands, both tree inference and detailed comparison with taxonomy are great challenges. We now provide one solution in the latest Release 3.0 of the alignment-free and whole-genome-based web server CVTree3. The server resides in a cluster of 64 cores and is equipped with an interactive, collapsible, and expandable tree display. It is capable of comparing the tree branching order with prokaryotic classification at all taxonomic ranks from domains down to species and strains. CVTree3 allows for inquiry by taxon names and trial on lineage modifications. In addition, it reports a summary of monophyletic and non-monophyletic taxa at all ranks as well as produces print-quality subtree figures. After giving an overview of retrospective verification of the CVTree approach, the power of the new server is described for the mega-classification of prokaryotes and determination of taxonomic placement of some newly-sequenced genomes. A few discrepancies between CVTree and 16S rRNA analyses are also summarized with regard to possible taxonomic revisions. CVTree3 is freely accessible to all users at http://tlife.fudan.edu.cn/cvtree3/ without login requirements.
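
    The composition vector approach behind CVTree builds k-mer (oligopeptide) frequency vectors from whole proteomes and compares them with a correlation-like distance. The sketch below keeps only the bare skeleton of that idea, using raw k-mer counts and a cosine dissimilarity on toy sequences; the real method additionally subtracts a Markov-model background before comparison, which is omitted here.

        from collections import Counter
        from math import sqrt

        def kmer_vector(proteome, k=3):
            """Raw k-mer counts over all protein sequences of a proteome."""
            counts = Counter()
            for seq in proteome:
                for i in range(len(seq) - k + 1):
                    counts[seq[i:i + k]] += 1
            return counts

        def cosine_distance(u, v):
            dot = sum(n * v.get(kmer, 0) for kmer, n in u.items())
            norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
            return 1.0 - dot / norm if norm else 1.0

        proteome_a = ["MKVLAAGIV", "MSTNPKPQR"]   # toy sequences
        proteome_b = ["MKVLAVGIV", "MSTNPKAQR"]
        print(cosine_distance(kmer_vector(proteome_a), kmer_vector(proteome_b)))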

  7. CVTree3 Web Server for Whole-genome-based and Alignment-free Prokaryotic Phylogeny and Taxonomy.

    Science.gov (United States)

    Zuo, Guanghong; Hao, Bailin

    2015-10-01

    A faithful phylogeny and an objective taxonomy for prokaryotes should agree with each other and ultimately follow the genome data. With the number of sequenced genomes reaching tens of thousands, both tree inference and detailed comparison with taxonomy are great challenges. We now provide one solution in the latest Release 3.0 of the alignment-free and whole-genome-based web server CVTree3. The server resides in a cluster of 64 cores and is equipped with an interactive, collapsible, and expandable tree display. It is capable of comparing the tree branching order with prokaryotic classification at all taxonomic ranks from domains down to species and strains. CVTree3 allows for inquiry by taxon names and trial on lineage modifications. In addition, it reports a summary of monophyletic and non-monophyletic taxa at all ranks as well as produces print-quality subtree figures. After giving an overview of retrospective verification of the CVTree approach, the power of the new server is described for the mega-classification of prokaryotes and determination of taxonomic placement of some newly-sequenced genomes. A few discrepancies between CVTree and 16S rRNA analyses are also summarized with regard to possible taxonomic revisions. CVTree3 is freely accessible to all users at http://tlife.fudan.edu.cn/cvtree3/ without login requirements. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  8. iELM—a web server to explore short linear motif-mediated interactions

    Science.gov (United States)

    Weatheritt, Robert J.; Jehl, Peter; Dinkel, Holger; Gibson, Toby J.

    2012-01-01

    The recent expansion in our knowledge of protein–protein interactions (PPIs) has allowed the annotation and prediction of hundreds of thousands of interactions. However, the function of many of these interactions remains elusive. The interactions of Eukaryotic Linear Motif (iELM) web server provides a resource for predicting the function and positional interface for a subset of interactions mediated by short linear motifs (SLiMs). The iELM prediction algorithm is based on the annotated SLiM classes from the Eukaryotic Linear Motif (ELM) resource and allows users to explore both annotated and user-generated PPI networks for SLiM-mediated interactions. By incorporating the annotated information from the ELM resource, iELM provides functional details of PPIs. This can be used in proteomic analysis, for example, to infer whether an interaction promotes complex formation or degradation. Furthermore, details of the molecular interface of the SLiM-mediated interactions are also predicted. This information is displayed in a fully searchable table, as well as graphically with the modular architecture of the participating proteins extracted from the UniProt and Phospho.ELM resources. A network figure is also presented to aid the interpretation of results. The iELM server supports single protein queries as well as large-scale proteomic submissions and is freely available at http://i.elm.eu.org. PMID:22638578

  9. Personalized Pseudonyms for Servers in the Cloud

    OpenAIRE

    Xiao Qiuyu; Reiter Michael K.; Zhang Yinqian

    2017-01-01

    A considerable and growing fraction of servers, especially of web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve privacy of clients from network attackers residing between the clients and the cloud: We design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud’s tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced “popsicle”), ...

  10. GFFview: A Web Server for Parsing and Visualizing Annotation Information of Eukaryotic Genome.

    Science.gov (United States)

    Deng, Feilong; Chen, Shi-Yi; Wu, Zhou-Lin; Hu, Yongsong; Jia, Xianbo; Lai, Song-Jia

    2017-10-01

    Owing to the wide application of RNA sequencing (RNA-seq) technology, more and more eukaryotic genomes have been extensively annotated, covering, for example, gene structure, alternative splicing, and noncoding loci. Genome annotation is commonly stored as plain text in General Feature Format (GFF), which can be hundreds or thousands of megabytes in size. Manipulating a GFF file is therefore a challenge for biologists without bioinformatics skills. In this study, we provide a web server (GFFview) for parsing the annotation information of a eukaryotic genome and generating statistical descriptions of six indices for visualization. GFFview is very useful for investigating the quality and differences of de novo assembled transcriptomes in RNA-seq studies.
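
    Because GFF is a plain tab-separated format with nine fixed columns, even a short script can produce the kind of per-feature-type statistics GFFview reports. The sketch below follows the GFF3 column layout and simply tallies feature types; the six summary indices computed by GFFview itself may differ, and the input file name is hypothetical.

        import gzip
        from collections import Counter

        GFF_COLUMNS = ["seqid", "source", "type", "start", "end",
                       "score", "strand", "phase", "attributes"]

        def feature_type_counts(path):
            """Count feature types (gene, mRNA, exon, ...) in a possibly gzipped GFF file."""
            opener = gzip.open if path.endswith(".gz") else open
            counts = Counter()
            with opener(path, "rt") as handle:
                for line in handle:
                    if line.startswith("#") or not line.strip():
                        continue
                    fields = dict(zip(GFF_COLUMNS, line.rstrip("\n").split("\t")))
                    counts[fields["type"]] += 1
            return counts

        # print(feature_type_counts("annotation.gff3.gz"))  # hypothetical input file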

  11. ATLAS software stack on ARM64

    CERN Document Server

    Smith, Joshua Wyatt; The ATLAS collaboration; Stewart, Graeme; Seuster, Rolf; Quadt, Arnulf

    2017-01-01

    This paper reports on the port of the ATLAS software stack onto new prototype ARM64 servers. This included building the “external” packages that the ATLAS software relies on. Patches were needed to introduce this new architecture into the build as well as patches that correct for platform specific code that caused failures on non-x86 architectures. These patches were applied such that porting to further platforms will need no or only very little adjustments. A few additional modifications were needed to account for the different operating system, Ubuntu instead of Scientific Linux 6 / CentOS7. Selected results from the validation of the physics outputs on these ARM 64-bit servers will be shown. CPU, memory and IO intensive benchmarks using ATLAS specific environment and infrastructure have been performed, with a particular emphasis on the performance vs. energy consumption.

  12. ATLAS software stack on ARM64

    Science.gov (United States)

    Smith, Joshua Wyatt; Stewart, Graeme A.; Seuster, Rolf; Quadt, Arnulf; ATLAS Collaboration

    2017-10-01

    This paper reports on the port of the ATLAS software stack onto new prototype ARM64 servers. This included building the “external” packages that the ATLAS software relies on. Patches were needed to introduce this new architecture into the build as well as patches that correct for platform specific code that caused failures on non-x86 architectures. These patches were applied such that porting to further platforms will need no or only very little adjustments. A few additional modifications were needed to account for the different operating system, Ubuntu instead of Scientific Linux 6 / CentOS7. Selected results from the validation of the physics outputs on these ARM 64-bit servers will be shown. CPU, memory and IO intensive benchmarks using ATLAS specific environment and infrastructure have been performed, with a particular emphasis on the performance vs. energy consumption.

  13. EarthServer - an FP7 project to enable the web delivery and analysis of 3D/4D models

    Science.gov (United States)

    Laxton, John; Sen, Marcus; Passmore, James

    2013-04-01

    EarthServer aims at open access and ad-hoc analytics on big Earth Science data, based on the OGC geoservice standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). The WCS model defines "coverages" as a unifying paradigm for multi-dimensional raster data, point clouds, meshes, etc., thereby addressing a wide range of Earth Science data including 3D/4D models. WCPS allows declarative SQL-style queries on coverages. The project is developing a pilot implementing these standards, and will also investigate the use of GeoSciML to describe coverages. Integration of WCPS with XQuery will in turn allow coverages to be queried in combination with their metadata and GeoSciML description. The unified service will support navigation, extraction, aggregation, and ad-hoc analysis on coverage data from SQL. Clients will range from mobile devices to high-end immersive virtual reality, and will enable 3D model visualisation using web browser technology coupled with developing web standards. EarthServer is establishing open-source client and server technology intended to be scalable to Petabyte/Exabyte volumes, based on distributed processing, supercomputing, and cloud virtualization. Implementation will be based on the existing rasdaman server technology. Services using rasdaman technology are being installed serving the atmospheric, oceanographic, geological, cryospheric, planetary and general earth observation communities. The geology service (http://earthserver.bgs.ac.uk/) is being provided by BGS and at present includes satellite imagery, superficial thickness data, onshore DTMs and 3D models for the Glasgow area. It is intended to extend the data sets available to include 3D voxel models. Use of the WCPS standard allows queries to be constructed against single or multiple coverages; for example, on a single coverage, data for a particular area or data within a particular range of pixel values can be selected. Queries on multiple surfaces can be
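
    To give a flavour of the declarative, SQL-style WCPS queries the abstract refers to, here is a hedged sketch of submitting one query over HTTP with Python. The coverage name, axis labels, threshold and the exact request parameters accepted by a given rasdaman endpoint are assumptions, and the service path is a guess appended to the geology service URL quoted above.

        from urllib.parse import urlencode
        from urllib.request import urlopen

        # Illustrative WCPS query: subset a coverage by Lat/Long and keep values above a threshold
        wcps = """
        for c in (superficial_thickness)
        return encode(
          (c[Lat(55.8:55.9), Long(-4.3:-4.2)] > 10) * c[Lat(55.8:55.9), Long(-4.3:-4.2)],
          "image/tiff")
        """

        endpoint = "http://earthserver.bgs.ac.uk/rasdaman/ows"   # assumed query interface
        query = urlencode({"service": "WCS", "version": "2.0.1",
                           "request": "ProcessCoverages", "query": wcps})
        with urlopen(endpoint + "?" + query) as resp:
            data = resp.read()
        print(len(data), "bytes returned")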

  14. RS-WebPredictor

    DEFF Research Database (Denmark)

    Zaretzki, J.; Bergeron, C.; Huang, T.-W.

    2013-01-01

    Regioselectivity-WebPredictor (RS-WebPredictor) is a server that predicts isozyme-specific cytochrome P450 (CYP)-mediated sites of metabolism (SOMs) on drug-like molecules. Predictions may be made for the promiscuous 2C9, 2D6 and 3A4 CYP isozymes, as well as CYPs 1A2, 2A6, 2B6, 2C8, 2C19 and 2E1... RS-WebPredictor is the first freely accessible server that predicts the regioselectivity of the last six isozymes. Server execution time is fast, taking on average 2s to encode a submitted molecule and 1s to apply a given model, allowing for high-throughput use in lead optimization projects... Availability: RS-WebPredictor is accessible for free use at http://reccr.chem.rpi.edu/Software/RS-WebPredictor...

  15. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S.

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: June ATLAS Plenary Meeting; Tutorial on Physics EDM and Tools (June); Freiburg Overview Week; Ketevi Assamagan's Tutorial on Analysis Tools. Click here to browse WLAP for all ATLAS lectures.

  16. Experience with ATLAS MySQL PanDA database service

    International Nuclear Information System (INIS)

    Smirnov, Y; Wlodek, T; Hover, J; Smith, J; Wenaus, T; Yu, D; De, K; Ozturk, N

    2010-01-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  17. Experience with ATLAS MySQL PanDA database service

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, Y; Wlodek, T; Hover, J; Smith, J; Wenaus, T; Yu, D [Physics Department, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); De, K; Ozturk, N [Department of Physics, University of Texas at Arlington, Arlington, TX, 76019 (United States)

    2010-04-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  18. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    Science.gov (United States)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014, Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367) tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), the use of WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System bodies are going to be progressively available. Derived parameters such as summary products and indices can be produced through WCPS queries, as well as derived imagery colour combination products, dynamically generated and accessed also through OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the open-source NASA WorldWind (e.g. Hogan, 2011) virtual globe as its visualisation engine, and the array database Rasdaman Community Edition as its core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible on http://planetserver.eu. All its code base is going to be available on GitHub, on

  19. MetCCS predictor: a web server for predicting collision cross-section values of metabolites in ion mobility-mass spectrometry based metabolomics.

    Science.gov (United States)

    Zhou, Zhiwei; Xiong, Xin; Zhu, Zheng-Jiang

    2017-07-15

    In metabolomics, rigorous structural identification of metabolites presents a challenge for bioinformatics. The use of collision cross-section (CCS) values of metabolites derived from ion mobility-mass spectrometry effectively increases the confidence of metabolite identification, but this technique suffers from the limited number of available CCS values. Currently, there is no software available for rapidly generating metabolites' CCS values. Here, we developed the first web server, namely, MetCCS Predictor, for predicting CCS values. It can predict the CCS values of metabolites using molecular descriptors within a few seconds. Common users with a limited background in bioinformatics can benefit from this software and effectively improve metabolite identification in metabolomics. The web server is freely available at: http://www.metabolomics-shanghai.org/MetCCS/ . jiangzhu@sioc.ac.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  20. Monitoring and controlling ATLAS data management: The Rucio web user interface

    OpenAIRE

    Lassnig, Mario; Beermann, Thomas Alfons; Vigne, Ralph; Barisits, Martin-Stefan; Garonne, Vincent; Serfon, Cedric

    2015-01-01

    The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component, and allows easy integration for user-generated views. The interface follows three des...

  1. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S

    2005-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: Atlas Software Week Plenary, 6-10 December 2004; North American ATLAS Physics Workshop (Tucson), 20-21 December 2004 (17 talks); Physics Analysis Tools Tutorial (Tucson), 19 December 2004; Full Chain Tutorial, 21 September 2004; ATLAS Plenary Sessions, 17-18 February 2005 (17 talks). Coming soon: ATLAS Tutorial on Electroweak Physics, 14 Feb. 2005; Software Workshop, 21-22 February 2005. Click here to browse WLAP for all ATLAS lectures.

  2. Automated grading of homework assignments and tests in introductory and intermediate statistics courses using active server pages.

    Science.gov (United States)

    Stockburger, D W

    1999-05-01

    Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.
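
    The core trick in such a system is to derive each student's data set deterministically from their identity, so the server can regenerate the same numbers when grading. A minimal sketch of that idea follows in Python (the original system used Active Server Pages on NT Server 4.0); the statistics requested and the grading tolerance are illustrative.

        import random
        import statistics

        def assignment_data(student_id, n=20):
            """Reproducible per-student data set: the same ID always yields the same numbers."""
            rng = random.Random(student_id)
            return [round(rng.gauss(50, 10), 1) for _ in range(n)]

        def grade(student_id, submitted_mean, submitted_sd, tol=0.05):
            data = assignment_data(student_id)
            key = {"mean": statistics.mean(data), "sd": statistics.stdev(data)}
            answers = [("mean", submitted_mean), ("sd", submitted_sd)]
            score = sum(abs(value - key[name]) <= tol for name, value in answers)
            return score, key

        print(grade("student42", submitted_mean=49.9, submitted_sd=10.2))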

  3. Web page for the management of an animal shelter

    OpenAIRE

    Segura Valls, Marc

    2011-01-01

    Web application built with ASP in Visual Studio 2010 and Visual C#, Microsoft SQL Server and AJAX.

  4. ngLOC: software and web server for predicting protein subcellular localization in prokaryotes and eukaryotes

    Directory of Open Access Journals (Sweden)

    King Brian R

    2012-07-01

    Full Text Available Abstract Background Understanding protein subcellular localization is a necessary component toward understanding the overall function of a protein. Numerous computational methods have been published over the past decade, with varying degrees of success. Despite the large number of published methods in this area, only a small fraction of them are available for researchers to use in their own studies. Of those that are available, many are limited by predicting only a small number of organelles in the cell. Additionally, the majority of methods predict only a single location for a sequence, even though it is known that a large fraction of the proteins in eukaryotic species shuttle between locations to carry out their function. Findings We present a software package and a web server for predicting the subcellular localization of protein sequences based on the ngLOC method. ngLOC is an n-gram-based Bayesian classifier that predicts subcellular localization of proteins both in prokaryotes and eukaryotes. The overall prediction accuracy varies from 89.8% to 91.4% across species. This program can predict 11 distinct locations each in plant and animal species. ngLOC also predicts 4 and 5 distinct locations on gram-positive and gram-negative bacterial datasets, respectively. Conclusions ngLOC is a generic method that can be trained by data from a variety of species or classes for predicting protein subcellular localization. The standalone software is freely available for academic use under GNU GPL, and the ngLOC web server is also accessible at http://ngloc.unmc.edu.
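
    The n-gram Bayesian idea behind ngLOC can be illustrated in a few lines: score each localization class by the summed log-probabilities of the sequence's peptide n-grams, with add-one smoothing. The sketch below is a heavily simplified stand-in (two toy training sequences, no confidence scores), not the actual ngLOC model.

        from collections import Counter, defaultdict
        from math import log

        def ngrams(seq, n=3):
            return [seq[i:i + n] for i in range(len(seq) - n + 1)]

        def train(examples, n=3):
            """examples: list of (protein sequence, location label) pairs."""
            counts, totals = defaultdict(Counter), Counter()
            for seq, loc in examples:
                grams = ngrams(seq, n)
                counts[loc].update(grams)
                totals[loc] += len(grams)
            return counts, totals

        def classify(seq, counts, totals, n=3, vocab=20 ** 3):
            scores = {loc: sum(log((counts[loc][g] + 1) / (totals[loc] + vocab))
                               for g in ngrams(seq, n))
                      for loc in counts}
            return max(scores, key=scores.get)

        counts, totals = train([("MKWVTFISLLLLFSSAYS", "secreted"),
                                ("MSKGEELFTGVVPILVEL", "cytoplasm")])
        print(classify("MKWVTFISLL", counts, totals))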

  5. The TDAQ Analytics Dashboard: a real-time web application for the ATLAS TDAQ control infrastructure

    International Nuclear Information System (INIS)

    Miotto, Giovanna Lehmann; Magnoni, Luca; Sloper, John Erik

    2011-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) infrastructure is responsible for filtering and transferring ATLAS experimental data from detectors to mass storage systems. It relies on a large, distributed computing system composed of thousands of software applications running concurrently. In such a complex environment, information sharing is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking, the streams of messages sent by applications and the data published via information services are constantly monitored by experts to verify the correctness of running operations and to understand problematic situations. To simplify and improve system analysis and error detection tasks, we developed the TDAQ Analytics Dashboard, a web application that aims to collect, correlate and effectively visualize this real-time flow of information. The TDAQ Analytics Dashboard is composed of two main entities that reflect the twofold scope of the application. The first is the engine, a Java service that performs aggregation, processing and filtering of the real-time data stream and computes statistical correlations on sliding windows of time. The results are made available to clients via a simple web interface supporting an SQL-like query syntax. The second is the visualization, provided by an Ajax-based web application that runs in the client's browser. The dashboard approach allows information to be presented in a clear and customizable structure. Several types of interactive graphs are proposed as widgets that can be dynamically added and removed from visualization panels. Each widget acts as a client for the engine, querying the web interface to retrieve data matching the desired criteria. In this paper we present the design, development and evolution of the TDAQ Analytics Dashboard. We also present the statistical analysis computed by the application in this first period of high-energy data-taking operations for the ATLAS experiment.
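
    As a toy illustration of the engine's statistical correlation on sliding windows of time, the sketch below keeps two monitored metrics in fixed-length windows and reports their Pearson correlation once the window is full; the metric names and window length are invented, and the real engine is a Java service queried through an SQL-like web interface rather than Python.

        from collections import deque
        from statistics import correlation   # available in Python 3.10+

        WINDOW = 60   # samples per sliding window

        busy_fraction = deque(maxlen=WINDOW)
        error_rate = deque(maxlen=WINDOW)

        def update(busy, errors):
            """Feed one time step of two monitored metrics; return the correlation when full."""
            busy_fraction.append(busy)
            error_rate.append(errors)
            if len(busy_fraction) == WINDOW:
                return correlation(busy_fraction, error_rate)
            return None

        for t in range(120):                      # synthetic feed of 120 samples
            latest = update(busy=0.5 + 0.001 * t, errors=2 + 0.01 * t)
        print("correlation over the last window:", latest)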

  6. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    Energy Technology Data Exchange (ETDEWEB)

    Roe, S A, E-mail: shaun.roe@cern.c [CERN, CH-1211 Geneve 23 (Switzerland)

    2010-04-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics) a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation, but also programmatic use of the data when it is accessed from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Semiconductor Tracker.
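
    The same XML-to-SVG step can be reproduced offline to see how the transform template determines the rendering. The sketch below uses lxml in Python (an assumption; in the article the transform is applied in the browser) to turn a small, made-up fragment of conditions data into an SVG bar chart.

        from lxml import etree

        xml = etree.XML(
            "<channels>"
            "  <channel id='1' noise='3.2'/>"
            "  <channel id='2' noise='5.7'/>"
            "</channels>")

        xslt = etree.XML("""<xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/channels">
            <svg xmlns="http://www.w3.org/2000/svg" width="200" height="80">
              <xsl:for-each select="channel">
                <rect x="{(position() - 1) * 40}" y="0" width="30"
                      height="{@noise * 10}" fill="steelblue"/>
              </xsl:for-each>
            </svg>
          </xsl:template>
        </xsl:stylesheet>""")

        transform = etree.XSLT(xslt)
        print(str(transform(xml)))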

  7. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics) a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation, but also programmatic use of the data when it is accessed from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Semiconductor Tracker.

  8. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    CERN Document Server

    Roe, S A

    2010-01-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics) a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation, but also programmatic use of the data when it is accessed from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Sem...

  9. The library without walls: images, medical dictionaries, atlases, medical encyclopedias free on web.

    Science.gov (United States)

    Giglia, E

    2008-09-01

    The aim of this article was to present the "reference room" of the Internet, a real library without walls. The reader will find medical encyclopedias, dictionaries, atlases, e-books and images, and will also learn something useful about the use and reuse of images in a text and in a web site, according to copyright law.

  10. GeoServer beginner's guide

    CERN Document Server

    Youngblood, Brian

    2013-01-01

    The book provides step-by-step instructions and is written with the needs of a beginner in mind. It contains plenty of examples with accompanying screenshots and code for an easy learning curve. You are a web developer with knowledge of server-side scripting, and have experience with installing applications on the server. You want more than Google Maps, and wish to offer dynamically built maps on your site with your latest geospatial data stored in MySQL, PostGIS, MSSQL or Oracle. If this is the case, this book is meant for you.

  11. InterProSurf: a web server for predicting interacting sites on protein surfaces

    Science.gov (United States)

    Negi, Surendra S.; Schein, Catherine H.; Oezguen, Numan; Power, Trevor D.; Braun, Werner

    2009-01-01

    Summary A new web server, InterProSurf, predicts interacting amino acid residues in proteins that are most likely to interact with other proteins, given the 3D structures of subunits of a protein complex. The prediction method is based on solvent accessible surface area of residues in the isolated subunits, a propensity scale for interface residues and a clustering algorithm to identify surface regions with residues of high interface propensities. Here we illustrate the application of InterProSurf to determine which areas of Bacillus anthracis toxins and measles virus hemagglutinin protein interact with their respective cell surface receptors. The computationally predicted regions overlap with those regions previously identified as interface regions by sequence analysis and mutagenesis experiments. PMID:17933856

  12. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    CERN Document Server

    Valassi, A; Kalkhof, A; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN for accessing the data stored by the LHC experiments using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier web server and cache. Two new components have recently been added to CORAL to implement a model involving a middle tier "CORAL server" deployed close to the database and a tree of "CORAL server proxy" instances, with data caching and multiplexing functionalities, deployed close to the client. The new components are meant to provide advantages for read-only and read-write data access, in both offline and online use cases, in the areas of scalability and performance (multiplexing for several incoming connections, optional data caching) and security (authentication via proxy certificates). A first implementation of the two new c...

  13. System administration of ATLAS TDAQ computing environment

    Science.gov (United States)

    Adeel-Ur-Rehman, A.; Bujor, F.; Benes, J.; Caramarcu, C.; Dobson, M.; Dumitrescu, A.; Dumitru, I.; Leahu, M.; Valsan, L.; Oreshkin, A.; Popov, D.; Unel, G.; Zaytsev, A.

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with the administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are matched by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an LDAP-based authentication and role management system. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of a centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and role management, testing and validating 64-bit OSes, and upgrading the existing TDAQ hardware components, authentication servers and gateways.

  14. A GRID-type computation for the tile calorimeter of the ATLAS experiment at CERN

    International Nuclear Information System (INIS)

    Maidantchik, Carmen; Seixas, Jose Manoel de; Lanza, Marcelo Luiz Drumond; Santelli, Rafael

    2002-01-01

    For the hadronic calorimeter of ATLAS, the tile transfer has been developed as a Web system to facilitate the transfer of data produced during calibration test beam periods. It automatically searches, stages and provides a link to download the selected data stored at a remote file center. The system has an interface with the Run Info Database, which contains the description of all test beam runs. It is also possible to receive a link to the files by e-mail, avoiding waiting until the process is finished. In order to optimize file transmission, the system is connected to a central repository that stores information about the latest accesses. Once a user connects to the tile transfer, he/she can become a file server for other users. Thus, the selected file is split into several pieces across different servers. Each piece is sent from one server in parallel and the pieces are assembled at the final destination. We are currently working on version 2.0, dealing with security and efficiency requirements. The whole system runs over the Web and was developed in C, PHP and JavaScript. Tile transfer allows the file administration to be geographically distributed, avoiding an overload at the central repository. We also foresee integration with analysis tools via remote Web access and the publication of the results to the whole community. Among the benefits of this proposal, one can underline the effective management of data across the network of users. (author)
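
    The split-and-rebuild idea described above (pieces of one file fetched from several servers in parallel and assembled at the destination) can be sketched with HTTP range requests; the mirror URLs, chunk size and total size below are placeholders, and the real tile transfer protocol and peer discovery are not shown.

        import concurrent.futures
        import urllib.request

        def fetch_range(url, start, end):
            """Fetch one piece of the file via an HTTP Range request."""
            req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:
                return start, resp.read()

        def parallel_download(mirrors, total_size, chunk=1 << 20):
            """Split [0, total_size) into chunks, fetch each from a mirror, rebuild in order."""
            ranges = [(lo, min(lo + chunk, total_size) - 1)
                      for lo in range(0, total_size, chunk)]
            pieces = {}
            with concurrent.futures.ThreadPoolExecutor(max_workers=len(mirrors)) as pool:
                futures = [pool.submit(fetch_range, mirrors[n % len(mirrors)], lo, hi)
                           for n, (lo, hi) in enumerate(ranges)]
                for fut in concurrent.futures.as_completed(futures):
                    start, data = fut.result()
                    pieces[start] = data
            return b"".join(pieces[lo] for lo, _ in ranges)

        # data = parallel_download(["http://mirror-a.example/run123.dat",
        #                           "http://mirror-b.example/run123.dat"], total_size=8 << 20)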

  15. SG-ADVISER mtDNA: a web server for mitochondrial DNA annotation with data from 200 samples of a healthy aging cohort.

    Science.gov (United States)

    Rueda, Manuel; Torkamani, Ali

    2017-08-18

    Whole genome and exome sequencing usually include reads containing mitochondrial DNA (mtDNA). Yet, state-of-the-art pipelines and services for human nuclear genome variant calling and annotation do not handle mitochondrial genome data appropriately. As a consequence, any researcher desiring to add mtDNA variant analysis to their investigations is forced to explore the literature for mtDNA pipelines, evaluate them, and implement their own instance of the desired tool. This task is far from trivial, and can be prohibitive for non-bioinformaticians. We have developed SG-ADVISER mtDNA, a web server to facilitate the analysis and interpretation of mtDNA genomic data coming from next generation sequencing (NGS) experiments. The server was built in the context of our SG-ADVISER framework and on top of the MtoolBox platform (Calabrese et al., Bioinformatics 30(21):3115-3117, 2014), and includes most of its functionalities (i.e., assembly of mitochondrial genomes, heteroplasmic fractions, haplogroup assignment, functional and prioritization analysis of mitochondrial variants) as well as a back-end and a front-end interface. The server has been tested with unpublished data from 200 individuals of a healthy aging cohort (Erikson et al., Cell 165(4):1002-1011, 2016) and their data is made publicly available here along with a preliminary analysis of the variants. We observed that individuals over ~90 years old carried low levels of heteroplasmic variants in their genomes. SG-ADVISER mtDNA is a fast and functional tool that allows for variant calling and annotation of human mtDNA data coming from NGS experiments. The server was built with simplicity in mind, and builds on our own experience in interpreting mtDNA variants in the context of sudden death and rare diseases. Our objective is to provide an interface for non-bioinformaticians aiming to acquire (or contrast) mtDNA annotations via MToolBox. SG-ADVISER web server is freely available to all users at https://genomics.scripps.edu/mtdna .

  16. The HydroServer Platform for Sharing Hydrologic Data

    Science.gov (United States)

    Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.

    2010-12-01

    The CUAHSI Hydrologic Information System (HIS) is an internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture comprises servers for publishing and sharing data, a centralized catalog to support cross-server data discovery and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open-source code repository and development system. There is some reliance on widely used commercial software for general purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its

  17. Report to users of ATLAS

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1995-05-01

    This report contains discussions of the following areas: status of the Atlas accelerator; highlights of recent research at Atlas; a concept for an advanced exotic beam facility based on Atlas; the program advisory committee; the Atlas executive committee; and Atlas and the ANL Physics Division on the World Wide Web

  18. The Monitoring and Calibration Web Systems for the ATLAS Tile Calorimeter Data Quality Analysis

    International Nuclear Information System (INIS)

    Sivolella, A; Maidantchik, C; Ferreira, F

    2012-01-01

    The Tile Calorimeter (TileCal) is one of the ATLAS sub-detectors. The read-out is performed by about 10,000 PhotoMultiplier Tubes (PMTs). The signal of each PMT is digitized by an electronic channel. The Monitoring and Calibration Web System (MCWS) supports the data quality analysis of the electronic channels. This application was developed to assess the detector status and verify its performance. It can provide the user with the list of known TileCal problematic channels, which is stored in the ATLAS conditions database (COOL DB). The bad channels list guides the data quality validator in identifying new problematic channels and is used in data reconstruction; the system also allows the channels list to be updated directly in the COOL database. MCWS can generate summary results, such as eta-phi plots and comparative tables of the masked channels percentage. Maintenance of the detector equipment is performed regularly during LHC (Large Hadron Collider) shutdowns. When a channel is repaired, its calibration constants stored in the COOL database have to be updated. Additionally, the MCWS system manages the update of these calibration constant values in the COOL database. MCWS has been used by the Tile community since 2008, during the commissioning phase, and was upgraded to comply with ATLAS operation specifications. Among its future developments, an integration of MCWS with the TileCal control Web system (DCS) is foreseen in order to identify high-voltage problems automatically.

  19. Integration of ROOT Notebooks as a Web-based ATLAS Analysis tool for Public Data Releases and Outreach

    CERN Document Server

    Abah, Anthony

    2016-01-01

    The project worked on the development of a physics analysis and its software under the ROOT framework and Jupyter notebooks for the ATLAS Outreach and Naples teams. This analysis was created in the context of the release of data and Monte Carlo samples by the ATLAS collaboration. The project focuses on the enhancement of the recent opendata.atlas.cern web platform to be used as an educational resource for university students and new researchers. The generated analysis structure and tutorials will be used to extend the participation of students from other locations around the world. We conclude the project with the creation of a complete notebook implementing the so-called W analysis in C++ for the mentioned platform.

  20. Research and implementation of a Web-based remote desktop image monitoring system

    International Nuclear Information System (INIS)

    Ren Weijuan; Li Luofeng; Wang Chunhong

    2010-01-01

    This work studied and implemented a Web-based ISS (Image Snapshot Server) system using Java Web technology. The ISS system consists of a client web browser and a server. The server part is divided into three modules: the screenshot software, the web server and the Oracle database. The screenshot software captures the desktop of the remotely monitored PC and sends the pictures to a Tomcat web server for real-time display on the web. At the same time, these pictures are also saved in an Oracle database. Through the web browser, the monitoring person can view the real-time and historical desktop pictures of the monitored PC over a given period. It is very convenient for any user to monitor the desktop image of a remotely monitored PC. (authors)
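
    A minimal sketch of the capture-and-upload side of such a system is shown below, assuming Pillow for the desktop grab and a hypothetical upload endpoint; the original implementation is a Java stack with Tomcat and Oracle rather than Python.

        import io
        import time
        import urllib.request
        from PIL import ImageGrab   # Pillow; runs on the machine being monitored

        UPLOAD_URL = "http://monitor.example.org/iss/upload"   # hypothetical endpoint

        def capture_and_send():
            shot = ImageGrab.grab()                 # grab the current desktop
            buf = io.BytesIO()
            shot.save(buf, format="PNG")
            req = urllib.request.Request(UPLOAD_URL, data=buf.getvalue(),
                                         headers={"Content-Type": "image/png"})
            urllib.request.urlopen(req)

        # while True:                               # send one frame every few seconds
        #     capture_and_send()
        #     time.sleep(5)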

  1. PlanetServer: Innovative approaches for the online analysis of hyperspectral satellite data from Mars

    Science.gov (United States)

    Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.

    2014-06-01

    PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer (CRISM) hyperspectral data from Mars. It is part of the EarthServer project which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage, and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The client WebGIS, consisting mainly of the OpenLayers javascript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations, surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step, and is tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies will be further discussed. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.

  2. The ATLAS Public Web Pages: Online Management of HEP External Communication Content

    CERN Document Server

    Goldfarb, Steven; Phoboo, Abha Eli; Shaw, Kate

    2015-01-01

    The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves application of the html design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss usage of the new public web pages to implement outreach strategy through implementation of clearly presented themes, consistent audience targeting and messaging, and th...

  3. Basic Static Code Analysis to Detect Backdoor Shells on a Web Server

    Directory of Open Access Journals (Sweden)

    Nelly Indriani Widiastuti

    2017-05-01

    Full Text Available Accessing a computer system without authorization is a crime committed by entering or infiltrating a computer network without the knowledge of the system's owner. Such attacks aim to spy on or steal important and confidential information. In practice, attackers plant backdoor shell files in locations that are hard for the system owner to find. Several existing tools are still terminal-based and search for files using lists of previously registered names; as a result, when a new type of backdoor shell infects a system, these tools cannot detect its presence. Therefore, in this study the search for backdoor shells on a web server uses the basic static code analysis method. The file system is processed in two main stages: string matching and taint analysis. In the taint analysis stage, the system computes the probability that each signature is a backdoor, in order to compensate for an incomplete backdoor dictionary. Based on the results of tests performed on 3964 files, the accuracy obtained is greater than that of the php shell detector application by 75%.

  4. Penerapan Pengujian Keamanan Web Server Menggunakan Metode OWASP versi 4 (Studi Kasus Web Server Ujian Online

    Directory of Open Access Journals (Sweden)

    Mohammad Muhsin

    2016-05-01

    Full Text Available The Faculty of Engineering of Universitas Muhammadiyah Ponorogo has administered its mid-term and final examinations using Si Ujo (Sistem Ujian Online), a web-based online examination application. From 2012 to 2014, Si Ujo went through several rounds of development, both in its features and in the data it stores, which include the course grades of every engineering student of Universitas Muhammadiyah Ponorogo. Given the importance of the stored data, security testing of the Si Ujo application needs to be carried out. The security testing is performed to determine the level of vulnerability so that attacks by irresponsible parties can be avoided. One method for testing web-based applications is OWASP (Open Web Application Security Project) version 4, issued by owasp.org, a non-profit organization dedicated to web application security. The test results using OWASP version 4 show that authentication management, authorization and session management have not been implemented well, so further improvements are needed by the stakeholders of the Faculty of Engineering of Universitas Muhammadiyah Ponorogo.

  5. ComplexContact: a web server for inter-protein contact prediction using deep learning

    KAUST Repository

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-01-01

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complex and interact at residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  6. ComplexContact: a web server for inter-protein contact prediction using deep learning

    KAUST Repository

    Zeng, Hong

    2018-05-20

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complex and interact at residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  7. ComplexContact: a web server for inter-protein contact prediction using deep learning.

    Science.gov (United States)

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-05-22

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complex and interact at residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  8. Energy-efficient server management; Energieeffizientes Servermanagement

    Energy Technology Data Exchange (ETDEWEB)

    Sauter, B.

    2003-07-01

    This final report for the Swiss Federal Office of Energy (SFOE) presents the results of a project that aimed to develop an automatic shut-down system for the servers used in typical electronic data processing installations to be found in small and medium-sized enterprises. The purpose of shutting down these computers - the saving of energy - is discussed. The development of a shutdown unit on the basis of a web-server that automatically shuts down the servers connected to it and then interrupts their power supply is described. The functions of the unit, including pre-set times for switching on and off, remote operation via the Internet and its interaction with clients connected to it are discussed. Examples of the system's user interface are presented.

  9. A Server-Client-Based Graphical Development Environment for Physics Analyses (VISPA)

    International Nuclear Information System (INIS)

    Bretz, H-P; Erdmann, M; Fischer, R; Hinzmann, A; Klingebiel, D; Komm, M; Müller, G; Rieger, M; Steffens, J; Steggemann, J; Urban, M; Winchen, T

    2012-01-01

    The Visual Physics Analysis (VISPA) project provides a graphical development environment for data analysis. It addresses the typical development cycle of (re-)designing, executing, and verifying an analysis. We present the new server-client-based web application of the VISPA project to perform physics analyses via a standard internet browser. This enables individual scientists to work with a large variety of devices including touch screens, and teams of scientists to share, develop, and execute analyses on a server via the web interface.

  10. Computational resources for ribosome profiling: from database to Web server and software.

    Science.gov (United States)

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This benefits from not only the awesome power of ribosome profiling but also an extensive range of computational resources available for ribosome profiling. At present, however, a comprehensive review on these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference to make appropriate choices among existing resources for the question at hand. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. A Prototype Ontology Tool and Interface for Coastal Atlas Interoperability

    Science.gov (United States)

    Wright, D. J.; Bermudez, L.; O'Dea, L.; Haddad, T.; Cummins, V.

    2007-12-01

    While significant capacity has been built in the field of web-based coastal mapping and informatics in the last decade, little has been done to take stock of the implications of these efforts or to identify best practice in terms of taking lessons learned into consideration. This study reports on the second of two transatlantic workshops that bring together key experts from Europe, the United States and Canada to examine state-of-the-art developments in coastal web atlases (CWA), based on web-enabled geographic information systems (GIS), along with future needs in mapping and informatics for the coastal practitioner community. While multiple benefits are derived from these tailor-made atlases (e.g. speedy access to multiple sources of coastal data and information; economic use of time by avoiding individual contact with different data holders), the potential exists to derive added value from the integration of disparate CWAs, to optimize decision-making at a variety of levels and across themes. The second workshop focused on the development of a strategy to make coastal web atlases interoperable by way of controlled vocabularies and ontologies. The strategy is based on web service oriented architecture and an implementation of Open Geospatial Consortium (OGC) web services, such as Web Feature Services (WFS) and Web Map Service (WMS). Atlases publish Catalog Web Services (CSW) using ISO 19115 metadata and controlled vocabularies encoded as Uniform Resource Identifiers (URIs). URIs allow the terminology of each atlas to be uniquely identified and facilitate the mapping of terminologies using semantic web technologies. A domain ontology was also created to formally represent coastal erosion terminology as a use case, with a test linkage of those terms between the Marine Irish Digital Atlas and the Oregon Coastal Atlas. A web interface is being developed to discover coastal hazard themes in distributed coastal atlases as part of a broader International Coastal

  12. CavityPlus: a web server for protein cavity detection with pharmacophore modelling, allosteric site identification and covalent ligand binding ability prediction.

    Science.gov (United States)

    Xu, Youjun; Wang, Shiwei; Hu, Qiwan; Gao, Shuaishi; Ma, Xiaomin; Zhang, Weilin; Shen, Yihang; Chen, Fangjin; Lai, Luhua; Pei, Jianfeng

    2018-05-10

    CavityPlus is a web server that offers protein cavity detection and various functional analyses. Using protein three-dimensional structural information as the input, CavityPlus applies CAVITY to detect potential binding sites on the surface of a given protein structure and rank them based on ligandability and druggability scores. These potential binding sites can be further analysed using three submodules, CavPharmer, CorrSite, and CovCys. CavPharmer uses a receptor-based pharmacophore modelling program, Pocket, to automatically extract pharmacophore features within cavities. CorrSite identifies potential allosteric ligand-binding sites based on motion correlation analyses between cavities. CovCys automatically detects druggable cysteine residues, which is especially useful to identify novel binding sites for designing covalent allosteric ligands. Overall, CavityPlus provides an integrated platform for analysing comprehensive properties of protein binding cavities. Such analyses are useful for many aspects of drug design and discovery, including target selection and identification, virtual screening, de novo drug design, and allosteric and covalent-binding drug design. The CavityPlus web server is freely available at http://repharma.pku.edu.cn/cavityplus or http://www.pkumdl.cn/cavityplus.

  13. Report to users of Atlas

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1996-06-01

    This report contains the following topics: Status of the ATLAS Accelerator; Highlights of Recent Research at ATLAS; Program Advisory Committee; ATLAS User Group Executive Committee; FMA Information Available On The World Wide Web; Conference on Nuclear Structure at the Limits; and Workshop on Experiments with Gammasphere at ATLAS

  14. The next generation of the ATLAS PanDA Monitoring System

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Klimentov, A; Love, P; Potekhin, M; Wenaus, T

    2014-01-01

    For many years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, with up to 1M completed jobs/day in 2013. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. Outside of ATLAS, the PanDA system is also being used in projects like AMS, LSST and a few others. It currently undergoes a significant redesign, both of the core server components responsible for workload management, brokerage and data access, and of the monitoring part, which is critically important for efficient execution of the workflow in a way that’s transparent to the user and also provides an effective set of tools for operational support. The new generation of the PanDA Monitoring Service is designed based on a proven, scalable, industry-standard Web Fr...

  15. Online characterization of planetary surfaces: PlanetServer, an open-source analysis and visualization tool

    Science.gov (United States)

    Marco Figuera, R.; Pham Huu, B.; Rossi, A. P.; Minin, M.; Flahaut, J.; Halder, A.

    2018-01-01

    The lack of open-source tools for hyperspectral data visualization and analysis creates a demand for new tools. In this paper we present the new PlanetServer, a set of tools comprising a web Geographic Information System (GIS) and a recently developed Python Application Programming Interface (API) capable of visualizing and analyzing a wide variety of hyperspectral data from different planetary bodies. Current open-source WebGIS tools are evaluated in order to give an overview and to contextualize how PlanetServer can help in these matters. The web client is described in detail, as are the datasets available in PlanetServer. The Python API is also described, along with the reasons for its development. Two examples of mineral characterization of different hydrosilicates such as chlorites, prehnites and kaolinites in the Nili Fossae area on Mars are presented. Since the results of the hyperspectral analysis and visualization compare well with previous literature, we suggest using the PlanetServer approach for such investigations.

  16. Advanced technologies for scalable ATLAS conditions database access on the grid

    International Nuclear Information System (INIS)

    Basset, R; Canali, L; Girone, M; Hawkings, R; Valassi, A; Viegas, F; Dimitrov, G; Nevski, P; Vaniachine, A; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.
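    The pilot-query idea can be sketched as follows: a cheap probe query is timed before the expensive conditions query, and the client backs off when the server looks overloaded. The function names, probe statement and threshold are illustrative assumptions, not the actual ATLAS utility library.

```python
# Illustrative sketch of the pilot-query idea described above: before running
# the real (expensive) conditions query, the client sends a cheap probe and
# backs off if the server looks overloaded. Names and thresholds are
# assumptions; this is not the actual ATLAS utility library.
import random
import time

PROBE_SQL = "SELECT 1 FROM DUAL"          # trivial query used as the pilot
MAX_PROBE_SECONDS = 0.5                   # assumed overload threshold

def run_with_pilot(connection, real_sql, max_retries=5):
    for attempt in range(max_retries):
        cur = connection.cursor()
        start = time.monotonic()
        cur.execute(PROBE_SQL)            # pilot query
        cur.fetchall()
        probe_time = time.monotonic() - start
        if probe_time <= MAX_PROBE_SECONDS:
            cur.execute(real_sql)         # server responsive: run the real query
            rows = cur.fetchall()
            cur.close()
            return rows
        cur.close()
        # Server under peak load: wait with randomized exponential backoff.
        time.sleep((2 ** attempt) + random.random())
    raise RuntimeError("database still overloaded after %d pilot attempts" % max_retries)
```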

  17. Research on Web-Based Networked Virtual Instrument System

    International Nuclear Information System (INIS)

    Tang, B P; Xu, C; He, Q Y; Lu, D

    2006-01-01

    The web-based networked virtual instrument (NVI) system is designed by using the object oriented methodology (OOM). The architecture of the NVI system consists of two major parts: client-web server interaction and instrument server-virtual instrument (VI) communication. The web server communicates with the instrument server and the clients connected to it over the Internet, and it handles identifying the user's name, managing the connection between the user and the instrument server, adding, removing and configuring VI's information. The instrument server handles setting the parameters of VI, confirming the condition of VI and saving the VI's condition information into the database. The NVI system is required to be a general-purpose measurement system that is easy to maintain, adapt and extend. Virtual instruments are connected to the instrument server and clients can remotely configure and operate these virtual instruments. An application of The NVI system is given in the end of the paper

  18. LabKey Server: An open source platform for scientific data integration, analysis and collaboration

    Science.gov (United States)

    2011-01-01

    Background Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely. Results To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i) Submitting specimens requests across collaborating organizations (ii) Graphically defining new experimental data types, metadata and wizards for data collection (iii) Transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database (iv) Securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays (v) Interacting dynamically with external data sources (vi) Tracking study participants and cohorts over time (vii) Developing custom interfaces using client libraries (viii) Authoring custom visualizations in a built-in R scripting environment. Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36 countries and 350

  19. LabKey Server: An open source platform for scientific data integration, analysis and collaboration

    Directory of Open Access Journals (Sweden)

    Lum Karl

    2011-03-01

    Full Text Available Abstract Background Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely. Results To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i Submitting specimens requests across collaborating organizations (ii Graphically defining new experimental data types, metadata and wizards for data collection (iii Transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database (iv Securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays (v Interacting dynamically with external data sources (vi Tracking study participants and cohorts over time (vii Developing custom interfaces using client libraries (viii Authoring custom visualizations in a built-in R scripting environment. Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36

  20. LabKey Server: an open source platform for scientific data integration, analysis and collaboration.

    Science.gov (United States)

    Nelson, Elizabeth K; Piehler, Britt; Eckels, Josh; Rauch, Adam; Bellew, Matthew; Hussey, Peter; Ramsay, Sarah; Nathe, Cory; Lum, Karl; Krouse, Kevin; Stearns, David; Connolly, Brian; Skillman, Tom; Igra, Mark

    2011-03-09

    Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely. To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i) Submitting specimens requests across collaborating organizations (ii) Graphically defining new experimental data types, metadata and wizards for data collection (iii) Transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database (iv) Securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays (v) Interacting dynamically with external data sources (vi) Tracking study participants and cohorts over time (vii) Developing custom interfaces using client libraries (viii) Authoring custom visualizations in a built-in R scripting environment. Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36 countries and 350 organizations. It tracks

  1. WebGimm: An integrated web-based platform for cluster analysis, functional analysis, and interactive visualization of results.

    Science.gov (United States)

    Joshi, Vineet K; Freudenberg, Johannes M; Hu, Zhen; Medvedovic, Mario

    2011-01-17

    Cluster analysis methods have been extensively researched, but the adoption of new methods is often hindered by technical barriers in their implementation and use. WebGimm is a free cluster analysis web service, and an open source general purpose clustering web-server infrastructure designed to facilitate easy deployment of integrated cluster analysis servers based on clustering and functional annotation algorithms implemented in R. Integrated functional analyses and interactive browsing of both clustering structure and functional annotations provide a complete analytical environment for cluster analysis and interpretation of results. The Java Web Start client-based interface is modeled after the familiar cluster/treeview packages, making its use intuitive to a wide array of biomedical researchers. For biomedical researchers, WebGimm provides an avenue to access state-of-the-art clustering procedures. For bioinformatics methods developers, WebGimm offers a convenient avenue to deploy their newly developed clustering methods. The WebGimm server, software and manuals can be freely accessed at http://ClusterAnalysis.org/.

  2. ProBiS tools (algorithm, database, and web servers) for predicting and modeling of biologically interesting proteins.

    Science.gov (United States)

    Konc, Janez; Janežič, Dušanka

    2017-09-01

    ProBiS (Protein Binding Sites) Tools consist of algorithm, database, and web servers for prediction of binding sites and protein ligands based on the detection of structurally similar binding sites in the Protein Data Bank. In this article, we review the operations that ProBiS Tools perform, provide comments on the evolution of the tools, and give some implementation details. We review some of its applications to biologically interesting proteins. ProBiS Tools are freely available at http://probis.cmm.ki.si and http://probis.nih.gov. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Management van World-Wide Web Servers

    NARCIS (Netherlands)

    van Hengstum, F.P.H.; Pras, Aiko

    1996-01-01

    The World Wide Web is a popular Internet application that makes it possible to offer documents to arbitrary Internet users. Because no provisions had yet been made for this, it was until recently not really feasible to manage the World Wide Web remotely. The University

  4. ATLAS Offline Data Quality Monitoring

    CERN Document Server

    Adelman, J; Boelaert, N; D'Onofrio, M; Frost, J A; Guyot, C; Hauschild, M; Hoecker, A; Leney, K J C; Lytken, E; Martinez-Perez, M; Masik, J; Nairz, A M; Onyisi, P U E; Roe, S; Schatzel, S; Schaetzel, S; Wilson, M G

    2010-01-01

    The ATLAS experiment at the Large Hadron Collider reads out 100 Million electronic channels at a rate of 200 Hz. Before the data are shipped to storage and analysis centres across the world, they have to be checked to be free from irregularities which render them scientifically useless. Data quality offline monitoring provides prompt feedback from full first-pass event reconstruction at the Tier-0 computing centre and can unveil problems in the detector hardware and in the data processing chain. Detector information and reconstructed proton-proton collision event characteristics are distilled into a few key histograms and numbers which are automatically compared with a reference. The results of the comparisons are saved as status flags in a database and are published together with the histograms on a web server. They are inspected by a 24/7 shift crew who can notify on-call experts in case of problems and in extreme cases signal data taking abort.

  5. Web tools for large-scale 3D biological images and atlases

    Directory of Open Access Journals (Sweden)

    Husz Zsolt L

    2012-06-01

    Full Text Available Abstract Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse very large volume data in the context of a standard web browser. The system provides an interactive visualisation for grey-level and colour 3D images, including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume.
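    To make the tiled-access idea concrete, the sketch below fetches one rendered view from an IIPImage-style server. FIF, WID and CVT are standard IIP parameters; the SECTION parameter is purely hypothetical and only stands in for the arbitrary-sectioning (IIP3D) extension described above.

```python
# Sketch of fetching a rendered view from an IIP-style tiled image server.
# FIF/WID/CVT are standard IIPImage parameters; the SECTION parameter is
# purely hypothetical and only stands in for the IIP3D sectioning extension.
import requests

SERVER = "http://atlas.example.org/fcgi-bin/iipsrv.fcgi"   # hypothetical host

params = {
    "FIF": "/data/embryo_volume.tif",  # hypothetical image path on the server
    "WID": 512,                        # width of the rendered view in pixels
    "CVT": "jpeg",                     # ask the server to return JPEG
    # Hypothetical placeholder for a section plane through the 3D volume,
    # e.g. defined by pitch, yaw and distance along the viewing axis.
    "SECTION": "pitch=30,yaw=45,dst=120",
}

tile = requests.get(SERVER, params=params, timeout=30)
tile.raise_for_status()
with open("section_view.jpg", "wb") as fh:
    fh.write(tile.content)
```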

  6. Web proxy cache replacement strategies simulation, implementation, and performance evaluation

    CERN Document Server

    ElAarag, Hala; Cobb, Jake

    2013-01-01

    This work presents a study of cache replacement strategies designed for static web content. Proxy servers can improve performance by caching static web content such as cascading style sheets, java script source files, and large files such as images. This topic is particularly important in wireless ad hoc networks, in which mobile devices act as proxy servers for a group of other mobile devices. Opening chapters present an introduction to web requests and the characteristics of web objects, web proxy servers and Squid, and artificial neural networks. This is followed by a comprehensive review o
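    As a concrete example of one classic replacement strategy commonly used as a baseline in such studies, the sketch below implements byte-bounded least-recently-used (LRU) eviction for a proxy cache of static objects.

```python
# Minimal sketch of least-recently-used (LRU) eviction for a size-bounded
# proxy cache of static objects (stylesheets, scripts, images).
# Capacity is counted in bytes of cached object bodies.
from collections import OrderedDict

class LRUProxyCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self._store = OrderedDict()        # url -> body, oldest first

    def get(self, url):
        body = self._store.get(url)
        if body is not None:
            self._store.move_to_end(url)   # mark as most recently used
        return body                        # None signals a cache miss

    def put(self, url, body):
        if url in self._store:             # replacing an existing entry
            self.used -= len(self._store.pop(url))
        while self._store and self.used + len(body) > self.capacity:
            _, evicted = self._store.popitem(last=False)   # evict LRU object
            self.used -= len(evicted)
        if len(body) <= self.capacity:     # skip objects larger than the cache
            self._store[url] = body
            self.used += len(body)
```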

  7. Fuzzy Clustering: An Approachfor Mining Usage Profilesfrom Web

    OpenAIRE

    Ms.Archana N. Boob; Prof. D. M. Dakhane

    2012-01-01

    Web usage mining is an application of data mining technology to mining the data of the web server log file. It can discover the browsing patterns of user and some kind of correlations between the web pages. Web usage mining provides the support for the web site design, providing personalization server and other business making decision, etc. Web mining applies the data mining, the artificial intelligence and the chart technology and so on to the web data and traces users' visiting characteris...
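    A hedged sketch of the fuzzy clustering step is shown below: the standard fuzzy c-means updates applied to session feature vectors extracted from server logs. The construction of those feature vectors from the log file is assumed to have been done elsewhere.

```python
# Sketch of the core fuzzy c-means updates used when clustering user sessions
# mined from web server logs. Each row of X is a session feature vector
# (e.g. time spent per page category); m is the usual fuzzifier (> 1).
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix U (n points x c clusters), rows sum to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        # Cluster centres: membership-weighted means of the data points.
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances from every point to every centre (eps avoids division by 0).
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        power = 2.0 / (m - 1.0)
        U = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** power).sum(axis=2)
    return centres, U
```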

  8. Performance Characteristics of Mirror Servers on the Internet

    National Research Council Canada - National Science Library

    Meyers, Andy

    1998-01-01

    ... retrieved. In this paper we present findings from measuring 9 clients scattered throughout the United States retrieving over 490,000 documents from 45 production web servers which mirror three different sites...

  9. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.

    Science.gov (United States)

    Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-09-23

    SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the

  10. miRQuest: integration of tools on a Web server for microRNA research.

    Science.gov (United States)

    Aguiar, R R; Ambrosio, L A; Sepúlveda-Hermosilla, G; Maracaja-Coutinho, V; Paschoal, A R

    2016-03-28

    This report describes the miRQuest - a novel middleware available in a Web server that allows the end user to do the miRNA research in a user-friendly way. It is known that there are many prediction tools for microRNA (miRNA) identification that use different programming languages and methods to realize this task. It is difficult to understand each tool and apply it to diverse datasets and organisms available for miRNA analysis. miRQuest can easily be used by biologists and researchers with limited experience with bioinformatics. We built it using the middleware architecture on a Web platform for miRNA research that performs two main functions: i) integration of different miRNA prediction tools for miRNA identification in a user-friendly environment; and ii) comparison of these prediction tools. In both cases, the user provides sequences (in FASTA format) as an input set for the analysis and comparisons. All the tools were selected on the basis of a survey of the literature on the available tools for miRNA prediction. As results, three different cases of use of the tools are also described, where one is the miRNA identification analysis in 30 different species. Finally, miRQuest seems to be a novel and useful tool; and it is freely available for both benchmarking and miRNA identification at http://mirquest.integrativebioinformatics.me/.

  11. EVA: continuous automatic evaluation of protein structure prediction servers.

    Science.gov (United States)

    Eyrich, V A; Martí-Renom, M A; Przybylski, D; Madhusudhan, M S; Fiser, A; Pazos, F; Valencia, A; Sali, A; Rost, B

    2001-12-01

    Evaluation of protein structure prediction methods is difficult and time-consuming. Here, we describe EVA, a web server for assessing protein structure prediction methods, in an automated, continuous and large-scale fashion. Currently, EVA evaluates the performance of a variety of prediction methods available through the internet. Every week, the sequences of the latest experimentally determined protein structures are sent to prediction servers, results are collected, performance is evaluated, and a summary is published on the web. EVA has so far collected data for more than 3000 protein chains. These results may provide valuable insight to both developers and users of prediction methods. http://cubic.bioc.columbia.edu/eva. eva@cubic.bioc.columbia.edu

  12. Web server for the administrative and technical documentation of the radiodiagnostic facilities

    Energy Technology Data Exchange (ETDEWEB)

    Soto, M; Campayo, J. M; Guardia, V. [Logistica y Acondicionamientos Industriales SAU, Sorolla Center, Local 10, Av. de las Cortes Valencianas No. 58, 46015 Valencia (Spain); Mayo, P., E-mail: m.soto@lainsa.co [TITANIA Servicios Tecnologicos SL, Sorolla Center, Local 10, Av. de las Cortes Valencianas No. 58, 46015 Valencia (Spain)

    2010-10-15

    The Radiological Protection Technical Unit of LAINSA, part of Grupo Dominguis, is currently assigned radiological safety tasks in a large number of medical X-ray facilities. It is recognised by the Nuclear Security Council as a specialist in the assessment of protection against the radiological risks associated with medical, industrial and nuclear activities, and it is also authorised as an external personal dosimetry centre. Medical X-ray facilities in particular generate a large amount of documentation required by the national regulatory authority to assure their correct operation. This documentation comprises administrative procedures for the regulatory authorities in the industrial and public health areas, periodic quality controls of the radiographic equipment, radiological verifications at different locations to measure radiation levels, certificates of employee training for work with radiation, dosimetric records of occupationally exposed workers, medical fitness documents for their jobs, etc. This paper presents a web server application to manage this information in an effective way. On this server, each facility has an online space with private key access containing all of the administrative documents and radiation safety reports of the facility. Moreover, the client responsible for the radiological safety of the centre can access all of this information at any moment, minimizing delays and optimizing the storage of the information in electronic format. The objective is that this information can be consulted, modified or checked quickly and safely at any time. All of this information has to be accessible to the medical facility concerned, to the Radiological Protection Technical Unit contracted by the facility to carry out the radiological protection assessment, and to the nuclear safety regulatory authority, in order to guarantee good practice in medical and nuclear activities. (Author)

  13. Adaptive proxy map server for efficient vector spatial data rendering

    Science.gov (United States)

    Sayar, Ahmet

    2013-01-01

    The rapid transmission of vector map data over the Internet is becoming a bottleneck of spatial data delivery and visualization in web-based environment because of increasing data amount and limited network bandwidth. In order to improve both the transmission and rendering performances of vector spatial data over the Internet, we propose a proxy map server enabling parallel vector data fetching as well as caching to improve the performance of web-based map servers in a dynamic environment. Proxy map server is placed seamlessly anywhere between the client and the final services, intercepting users' requests. It employs an efficient parallelization technique based on spatial proximity and data density in case distributed replica exists for the same spatial data. The effectiveness of the proposed technique is proved at the end of the article by the application of creating map images enriched with earthquake seismic data records.
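    The parallel-fetch idea can be sketched as follows: when replicas of the same vector layer exist, the proxy splits the requested bounding box and queries the replicas concurrently. The endpoints, layer name and WFS-style parameters are illustrative placeholders.

```python
# Sketch of parallel vector data fetching: the proxy splits the requested
# bounding box across replica map servers and fetches the parts concurrently.
# Endpoints, layer name and WFS-style parameters are illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor

import requests

REPLICAS = [
    "http://wfs1.example.org/wfs",   # hypothetical replica endpoints
    "http://wfs2.example.org/wfs",
]

def split_bbox(bbox, parts):
    """Split (minx, miny, maxx, maxy) into vertical strips, one per replica."""
    minx, miny, maxx, maxy = bbox
    step = (maxx - minx) / parts
    return [(minx + i * step, miny, minx + (i + 1) * step, maxy)
            for i in range(parts)]

def fetch_part(args):
    url, sub_bbox = args
    resp = requests.get(url, params={
        "service": "WFS", "request": "GetFeature",
        "typeName": "roads",                       # hypothetical layer name
        "bbox": ",".join(map(str, sub_bbox)),
        "outputFormat": "application/json",
    }, timeout=60)
    resp.raise_for_status()
    return resp.json().get("features", [])

def fetch_parallel(bbox):
    jobs = list(zip(REPLICAS, split_bbox(bbox, len(REPLICAS))))
    with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
        parts = pool.map(fetch_part, jobs)
    return [feature for part in parts for feature in part]
```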

  14. CSAR-web: a web server of contig scaffolding using algebraic rearrangements.

    Science.gov (United States)

    Chen, Kun-Tze; Lu, Chin Lung

    2018-05-04

    CSAR-web is a web-based tool that allows the users to efficiently and accurately scaffold (i.e. order and orient) the contigs of a target draft genome based on a complete or incomplete reference genome from a related organism. It takes as input a target genome in multi-FASTA format and a reference genome in FASTA or multi-FASTA format, depending on whether the reference genome is complete or incomplete, respectively. In addition, it requires the users to choose either 'NUCmer on nucleotides' or 'PROmer on translated amino acids' for CSAR-web to identify conserved genomic markers (i.e. matched sequence regions) between the target and reference genomes, which are used by the rearrangement-based scaffolding algorithm in CSAR-web to order and orient the contigs of the target genome based on the reference genome. In the output page, CSAR-web displays its scaffolding result in a graphical mode (i.e. scalable dotplot) allowing the users to visually validate the correctness of scaffolded contigs and in a tabular mode allowing the users to view the details of scaffolds. CSAR-web is available online at http://genome.cs.nthu.edu.tw/CSAR-web.

  15. FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research.

    Science.gov (United States)

    Mader, Malte; Simon, Ronald; Kurtz, Stefan

    2014-03-31

    A comprehensive view on all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization for such data. High quality image export enables the life scientist to easily communicate their results. A comprehensive data administration allows to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that has not been published before. The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream processed data support life scientists in generating hypotheses. The export of high quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software is available at http://www.zbh.uni-hamburg.de/fishoracle.

  16. WMT: The CSDMS Web Modeling Tool

    Science.gov (United States)

    Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can design a model from a set of components, edit component parameters, save models to a web-accessible server, share saved models with the community, submit runs to an HPC system, and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, the database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged
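    The sketch below is a purely hypothetical illustration of how a client might talk to a layered JSON-over-HTTP modeling service of the kind described above (wmt-db / wmt-api / wmt-exe); none of the routes, identifiers or payloads are the real WMT API.

```python
# Hypothetical illustration of a client-side flow against a layered
# JSON-over-HTTP modeling service: list components, save a model, submit a run.
# None of these routes or payloads are the real WMT API.
import requests

BASE = "https://csdms.example.org/wmt"        # hypothetical base URL

# 1. Ask the metadata layer which components (and parameters) are available.
components = requests.get(f"{BASE}/db/components", timeout=30).json()
print("available components:", [c.get("name") for c in components])

# 2. Save a small coupled model built from two illustrative component ids.
model = {
    "name": "delta-progradation-demo",
    "components": ["hydrotrend", "cem"],
    "parameters": {"hydrotrend": {"run_duration": 3650}},
}
model_id = requests.post(f"{BASE}/api/models", json=model, timeout=30).json()["id"]

# 3. Submit the saved model for execution on a registered compute host.
run = requests.post(f"{BASE}/exe/runs",
                    json={"model": model_id, "host": "hpc.example.edu"},
                    timeout=30).json()
print("run submitted:", run)
```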

  17. Web-Based Course Management and Web Services

    Science.gov (United States)

    Mandal, Chittaranjan; Sinha, Vijay Luxmi; Reade, Christopher M. P.

    2004-01-01

    The architecture of a web-based course management tool that has been developed at IIT [Indian Institute of Technology], Kharagpur and which manages the submission of assignments is discussed. Both the distributed architecture used for data storage and the client-server architecture supporting the web interface are described. Further developments…

  18. New nuclear data service at CNEA: retrieval of the update libraries from a local Web-Server; Nuevo servicio de datos nucleares en CNEA: obtencion de bibliotecas actualizadas desde un Servidor Local

    Energy Technology Data Exchange (ETDEWEB)

    Suarez, Patricia M [Comision Nacional de Energia Atomica, Ezeiza (Argentina). Centro Atomico Ezeiza; Pepe, Maria E [Comision Nacional de Energia Atomica, General San Martin (Argentina). Centro Atomico Constituyentes; Sbaffoni, Maria M [Comision Nacional de Energia Atomica, Buenos Aires (Argentina). Gerencia de Tecnologia

    2000-07-01

    A new On-line Nuclear Data Service was implemented on the National Atomic Energy Commission (CNEA) Web-Site. The information usually issued by the Nuclear Data Section of IAEA (NDS-IAEA) on CD-ROM, as well as complementary libraries periodically downloaded from a mirror server of the NDS-IAEA Service located at IPEN, Brazil, are available on the new CNEA Web page. On the site, users can find numerical data on neutron, charged-particle, and photonuclear reactions, nuclear structure, and decay data, with related bibliographic information. This data server is permanently maintained and updated by CNEA staff members. This team also offers assistance on the use and retrieval of nuclear data to local users. (author)

  19. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information

    International Nuclear Information System (INIS)

    Harwood, A; Miotto, G Lehmann; Magnoni, L; Vandelli, W; Savu, D

    2012-01-01

    This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises three main levels of the data collection cycle: The Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion. The first step of the project was to define a common interface with which to expose stored data. The interface designed for the project originates from the Google Data Protocol API. The idea is to allow read-only access to data providers, through HTTP requests similar in format to the SQL query structure. This provides a standardized way to access these different information sources within ATLAS. The Level 1 can be considered the engine of the system. The primary task of the Level 1 is to gather data from multiple data sources via the common interface, to correlate this data together, or over a defined time series, and expose the combined data as a whole to the Level 2 web

  20. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information

    Science.gov (United States)

    Harwood, A.; Lehmann Miotto, G.; Magnoni, L.; Vandelli, W.; Savu, D.

    2012-06-01

    This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises three main levels of the data collection cycle: The Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion. The first step of the project was to define a common interface with which to expose stored data. The interface designed for the project originates from the Google Data Protocol API. The idea is to allow read-only access to data providers, through HTTP requests similar in format to the SQL query structure. This provides a standardized way to access these different information sources within ATLAS. The Level 1 can be considered the engine of the system. The primary task of the Level 1 is to gather data from multiple data sources via the common interface, to correlate this data together, or over a defined time series, and expose the combined data as a whole to the Level 2 web

  1. Cluster Server IPTV dengan Penjadwalan Algoritma Round Robin

    Directory of Open Access Journals (Sweden)

    Didik Aribowo

    2016-03-01

    Full Text Available The rapid development of information technology goes hand in hand with the growing number of users connected to the Internet. Starting from a single server that constantly receives requests from many users, overload and crashes will slowly but surely occur, so that some requests can no longer be served by the single server. A cluster architecture can be built using the network load balancing concept, which allows data processing to be shared among several computers. This study uses the round robin scheduling algorithm as an alternative solution to the data overload problem on servers, which can affect the performance of an IPTV system. The numbers of requests used in this study are 5000, 15000, 25000 and 50000. With this method, the performance of the scheduling algorithm can be observed with emphasis on the following parameters: throughput, response time, reply connections and error connections, so that the best scheduling algorithm for optimizing the IPTV server cluster can be determined. The load balancing process automatically reduces the workload of each server so that no server is overloaded, allows the servers to use the available bandwidth more effectively, and provides fast access from the web browser to the hosted content. Implementing a web server cluster with a load balancing scheme can provide sustained system availability and sufficient scalability to keep serving every request from users.
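    A minimal sketch of the round-robin policy evaluated above is given below; the server addresses are placeholders, and real deployments would normally delegate this rotation to the load balancer (e.g. IPVS or a reverse proxy) rather than to application code.

```python
# Minimal sketch of round-robin dispatching across a pool of back-end servers,
# the scheduling policy evaluated above. Server addresses are placeholders.
import itertools

BACKENDS = ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"]  # example pool
_rotation = itertools.cycle(BACKENDS)

def next_backend():
    """Return the next server in strict rotation, regardless of current load."""
    return next(_rotation)

# Example: dispatch twelve incoming requests; each server receives four.
if __name__ == "__main__":
    for request_id in range(12):
        print(f"request {request_id} -> {next_backend()}")
```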

  2. Genonets server-a web server for the construction, analysis and visualization of genotype networks.

    Science.gov (United States)

    Khalid, Fahad; Aguilar-Rodríguez, José; Wagner, Andreas; Payne, Joshua L

    2016-07-08

    A genotype network is a graph in which vertices represent genotypes that have the same phenotype. Edges connect vertices if their corresponding genotypes differ in a single small mutation. Genotype networks are used to study the organization of genotype spaces. They have shed light on the relationship between robustness and evolvability in biological systems as different as RNA macromolecules and transcriptional regulatory circuits. Despite the importance of genotype networks, no tool exists for their automatic construction, analysis and visualization. Here we fill this gap by presenting the Genonets Server, a tool that provides the following features: (i) the construction of genotype networks for categorical and univariate phenotypes from DNA, RNA, amino acid or binary sequences; (ii) analyses of genotype network topology and how it relates to robustness and evolvability, as well as analyses of genotype network topography and how it relates to the navigability of a genotype network via mutation and natural selection; (iii) multiple interactive visualizations that facilitate exploratory research and education. The Genonets Server is freely available at http://ieu-genonets.uzh.ch. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
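    For illustration, the sketch below builds a small genotype network from scratch: vertices are genotypes that share a phenotype and edges join genotypes at Hamming distance one. The input mapping format is an assumption made for this example; the Genonets Server itself accepts DNA, RNA, amino acid or binary sequences.

```python
# Sketch of constructing a genotype network: vertices are genotypes sharing a
# phenotype, and edges join genotypes differing by a single point mutation
# (Hamming distance 1). The input dictionary format is an assumption.
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def genotype_network(genotype_to_phenotype, phenotype):
    """Return (vertices, edges) for one phenotype's genotype network."""
    vertices = [g for g, p in genotype_to_phenotype.items() if p == phenotype]
    edges = [(g1, g2) for g1, g2 in combinations(vertices, 2)
             if len(g1) == len(g2) and hamming(g1, g2) == 1]
    return vertices, edges

# Toy example with binary genotypes and a categorical phenotype.
data = {"000": "A", "001": "A", "011": "A", "111": "B", "101": "A"}
print(genotype_network(data, "A"))
# vertices: all genotypes with phenotype A; edges link single-mutation neighbours
```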

  3. Developing Server-Side Infrastructure for Large-Scale E-Learning of Web Technology

    Science.gov (United States)

    Simpkins, Neil

    2010-01-01

    The growth of E-business has made experience in server-side technology an increasingly important area for educators. Server-side skills are in increasing demand and recognised to be of relatively greater value than comparable client-side aspects (Ehie, 2002). In response to this, many educational organisations have developed E-business courses,…

  4. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    Science.gov (United States)

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  5. Web Based Database Processing for Turkish Navy Officers in USA

    National Research Council Canada - National Science Library

    Ozkan, Gokhan

    2002-01-01

    ...) and details the supporting web server and database server choices. It then presents a prototype of a web-based database system to speed and simplify tracking of academic and personal information...

  6. FISH Oracle: a web server for flexible visualization of DNA copy number data in a genomic context.

    Science.gov (United States)

    Mader, Malte; Simon, Ronald; Steinbiss, Sascha; Kurtz, Stefan

    2011-07-28

    The rapidly growing amount of array CGH data requires improved visualization software supporting the process of identifying candidate cancer genes. Optimally, such software should work across multiple microarray platforms, should be able to cope with data from different sources and should be easy to operate. We have developed the web-based software FISH Oracle to visualize data from multiple array CGH experiments in a genomic context. Its fast visualization engine and advanced web and database technology support highly interactive use. FISH Oracle comes with a convenient data import mechanism, powerful search options for genomic elements (e.g. gene names or karyobands), quick navigation and zooming into interesting regions, and mechanisms to export the visualization into different high quality formats. These features make the software especially suitable for the needs of life scientists. FISH Oracle offers a fast and easy to use visualization tool for array CGH and SNP array data. It allows for the identification of genomic regions representing minimal common changes based on data from one or more experiments. FISH Oracle will be instrumental in identifying candidate oncogenes and tumor suppressor genes based on the frequency and genomic position of DNA copy number changes. The FISH Oracle application and an installed demo web server are available at http://www.zbh.uni-hamburg.de/fishoracle.

  7. The Resource Manager of the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Aleksandrov, Igor; The ATLAS collaboration; Lehmann Miotto, Giovanna; Soloviev, Igor

    2016-01-01

    The Resource Manager is one of the core components of the Data Acquisition system of the ATLAS experiment at the LHC. The Resource Manager marshals the right for applications to access resources which may exist in multiple but limited copies, in order to avoid conflicts due to program faults or operator errors. The access to resources is managed in a manner similar to what a lock manager would do in other software systems. All the available resources and their association to software processes are described in the Data Acquisition configuration database. The Resource Manager is queried about the availability of resources every time an application needs to be started. The Resource Manager’s design is based on a client-server model, hence it consists of two components: the Resource Manager "server" application and the "client" shared library. The Resource Manager server implements all the needed functionalities, while the Resource Manager c...
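
    As a loose analogy to the lock-manager behaviour described above (purely illustrative; not the ATLAS Resource Manager code or API), resources existing in a limited number of copies can be marshalled by a counting allocator.

```python
# Sketch of a lock-manager-style resource allocator: each resource exists in a
# limited number of copies, and an application may only start if every resource
# it needs can be granted. Purely illustrative, not the ATLAS implementation.
class ResourceManager:
    def __init__(self, resources):
        # resources: mapping resource name -> number of available copies
        self.free = dict(resources)
        self.granted = {}            # (app, resource) -> copies held

    def request(self, app, resource, copies=1):
        """Grant the resource to the application, or refuse if exhausted."""
        if self.free.get(resource, 0) < copies:
            raise RuntimeError(f"{resource} unavailable for {app}")
        self.free[resource] -= copies
        self.granted[(app, resource)] = self.granted.get((app, resource), 0) + copies

    def release(self, app, resource):
        """Return all copies held by the application for this resource."""
        copies = self.granted.pop((app, resource), 0)
        self.free[resource] = self.free.get(resource, 0) + copies

if __name__ == "__main__":
    rm = ResourceManager({"readout-link": 2})     # hypothetical resource, 2 copies
    rm.request("appA", "readout-link")
    rm.request("appB", "readout-link")
    try:
        rm.request("appC", "readout-link")        # refused: no copies left
    except RuntimeError as err:
        print(err)
    rm.release("appA", "readout-link")
    rm.request("appC", "readout-link")            # now succeeds
```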

  8. Size-based scheduling to improve web performance

    NARCIS (Netherlands)

    Harchol-Balter, M.; Schroeder, B.; Bansal, N.; Agrawal, M.

    2003-01-01

    Is it possible to reduce the expected response time of every request at a web server, simply by changing the order in which we schedule the requests? That is the question we ask in this paper. This paper proposes a method for improving the performance of web servers servicing static HTTP requests.
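
    The idea of reordering requests by size can be sketched as a generic shortest-remaining-size-first queue (an illustration of the principle, not the authors' implementation).

```python
# Size-based scheduling sketch: among the pending static-file requests, always
# serve the one with the least data left to send, instead of arrival order.
import heapq

class SizeBasedQueue:
    def __init__(self):
        self._heap = []          # (remaining_bytes, request_id)

    def add(self, request_id, size_bytes):
        heapq.heappush(self._heap, (size_bytes, request_id))

    def next_to_serve(self):
        """Pop the request with the smallest remaining size."""
        size, request_id = heapq.heappop(self._heap)
        return request_id, size

if __name__ == "__main__":
    q = SizeBasedQueue()
    q.add("req-large", 5_000_000)
    q.add("req-small", 12_000)
    q.add("req-medium", 300_000)
    while True:
        try:
            print(q.next_to_serve())   # small requests finish first
        except IndexError:
            break
```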

  9. Designing sgRNAs with CRISPy web

    DEFF Research Database (Denmark)

    Blin, Kai; Lee, Sang Yup; Weber, Tilmann

    2017-01-01

    Tilmann Weber’s group at the Novo Nordisk Foundation Center for Biosustainability developed a user-friendly, web server implementation of the sgRNA prediction software, CRISPy, for non-computer scientists.

  10. Justifying the need for forensically ready protocols: A case study of identifying malicious web servers using client honeypots

    Energy Technology Data Exchange (ETDEWEB)

    Seifert, Christian; Endicott-Popovsky, Barbara E.; Frincke, Deborah A.; Komisarczuk, Peter; Muschevici, Radu; Welch, Ian D.

    2008-01-03

    Abstract: Client honeypot technology can find malicious web servers that attack web browsers and push malware, so-called drive-by-downloads, to the client machine. Merely recording the network traffic is insufficient to perform an efficient forensic analysis of the attack. Custom tools need to be developed to access and examine the embedded data of the network protocols. Once the information is extracted from the network data, it cannot be used to perform a behavioral analysis on the attack, therefore limiting the ability to answer what exactly happened on the attacked system. Implementation of a record/replay mechanism is proposed that allows the forensic examiner to easily extract application data from recorded network streams and allows applications to interact with such data for behavioral analysis purposes. A concrete implementation of such a setup for the HTTP and DNS protocols using the HTTP proxy Squid and the DNS proxy pdnsd is presented and its effect on digital forensic analysis demonstrated.

  11. New format for ATLAS e-news

    CERN Multimedia

    Pauline Gagnon

    ATLAS e-news got a new look! As of November 30, 2007, we have a new format for ATLAS e-news. Please go to http://atlas-service-enews.web.cern.ch/atlas-service-enews/index.html. ATLAS e-news will now be published on a weekly basis. If you are not an ATLAS collaboration member but still want to know how the ATLAS experiment is doing, we will soon have a version of ATLAS e-news intended for the general public. Information will be sent out in due time.

  12. Embedded Web Technology: Applying World Wide Web Standards to Embedded Systems

    Science.gov (United States)

    Ponyik, Joseph G.; York, David W.

    2002-01-01

    Embedded Systems have traditionally been developed in a highly customized manner. The user interface hardware and software, along with the interface to the embedded system, are typically unique to the system for which they are built, resulting in extra cost to the system in terms of development time and maintenance effort. World Wide Web standards have been developed in the past ten years with the goal of allowing servers and clients to interoperate seamlessly. The client and server systems can consist of differing hardware and software platforms, but the World Wide Web standards allow them to interface without knowing about the details of the system at the other end of the interface. Embedded Web Technology is the merging of Embedded Systems with the World Wide Web. Embedded Web Technology decreases the cost of developing and maintaining the user interface by allowing the user to interface to the embedded system through a web browser running on a standard personal computer. Embedded Web Technology can also be used to simplify an Embedded System's internal network.

  13. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    International Nuclear Information System (INIS)

    Chai, X; Liu, L; Xing, L

    2014-01-01

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability to share data and update software. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: a web server, an image server and a computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component that provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open source dcm4chee PACS system. The computation server can be written in any programming language as long as it can send and receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, and can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation running the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web
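
    As an illustration of the three-server layout described above (the URL and JSON fields are hypothetical; this is not the WIPPEP code), the web-server backend might forward a processing job to a computation server over HTTP as follows.

```python
# Hedged sketch of one hop in the described architecture: the web server
# backend posts a job description to a computation server over HTTP and reads
# back a JSON result. Endpoint and payload fields are hypothetical.
import json
import urllib.request

COMPUTATION_SERVER = "http://computation.example.org/api/segment"  # hypothetical

def submit_segmentation_job(patient_id, series_uid):
    payload = json.dumps({"patient": patient_id, "series": series_uid}).encode()
    request = urllib.request.Request(
        COMPUTATION_SERVER,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    result = submit_segmentation_job("PAT-0001", "1.2.840.0.0.example")
    print(result)
```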

  14. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chai, X; Liu, L; Xing, L [Stanford University School of Medicine, Stanford, CA (United States)]

    2014-06-01

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability to share data and update software. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: a web server, an image server and a computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component that provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open source dcm4chee PACS system. The computation server can be written in any programming language as long as it can send and receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, and can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation running the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web

  15. An Atlas of annotations of Hydra vulgaris transcriptome.

    Science.gov (United States)

    Evangelista, Daniela; Tripathi, Kumar Parijat; Guarracino, Mario Rosario

    2016-09-22

    RNA sequencing takes advantage of Next Generation Sequencing (NGS) technologies to analyze RNA transcript counts with excellent accuracy. Interpreting this huge amount of data in terms of biological information is still a key issue, which is why the creation of web resources useful for their analysis is highly desirable. Starting from a previous work, Transcriptator, we present the Atlas of Hydra vulgaris, an extensible web tool in which its complete transcriptome is annotated. In order to provide users with an advantageous resource that includes the whole functionally annotated transcriptome of the Hydra vulgaris water polyp, we implemented the Atlas web tool, which contains 31,988 accessible and downloadable transcripts of this non-reference model organism. Atlas, as a freely available resource, can be considered a valuable tool to rapidly retrieve functional annotation for transcripts differentially expressed in Hydra vulgaris exposed to distinct experimental treatments. WEB RESOURCE URL: http://www-labgtp.na.icar.cnr.it/Atlas

  16. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    We analyze the performance of CPU-bound network servers and demonstrate experimentally that the degradation in the performance of these servers under high-concurrency workloads is largely due to inefficient use of the hardware caches. We then describe an approach to speeding up event-driven network servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays...

  17. The monitoring and calibration Web system of the ATLAS hadronic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Maidantchik, Carmen; Gomes, Andressa Andrea Sivollela; Marroquim, Fernando [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)

    2011-07-01

    Full text: The scintillator tile hadronic calorimeter (TileCal) of the ATLAS detector measures the energy of the particles resulting from a collision. The calorimetry system was designed to absorb the energy of the particles that cross the detector and is composed of three barrels, each one equally divided into 64 modules. The ionizing particles that cross the tiles induce the production of light, whose intensity is proportional to the energy deposited by the fragment. The produced light propagates through the tiles towards the edges, where it is absorbed and carried until reaching the photomultiplier tubes (PMTs), also known as electronic readout channels. Each module combines up to 45 PMTs. For each run, the reconstruction process starts with a data analysis that can comprise different levels of information granularity, down to the level of individual PMTs. Following this phase, the Data Quality Monitoring Framework (DQMF) system automatically generates quality indicators associated with the channels. Depending on the configuration registered in the DQMF, the channel status can be automatically defined as good, affected or bad. The status of each module is defined by the percentage of good, affected or bad channels. At this point, the analysis of modules allows the identification of problematic ones by examining plots that are automatically generated during the data reconstruction stage. Then, the performance of a module is analysed over a time period that encompasses different types of runs. In this last step, the list of problematic channels can be modified through the insertion or exclusion of PMTs, as in the case where a channel is substituted. Additionally, during the whole calorimeter operation, it is fundamental to identify the electronic channels that are active, dead (not working), noisy, or saturated in the signal digitisation process. The Monitoring and Calibration Web System (MCWS) was

  18. The monitoring and calibration Web system of the ATLAS hadronic calorimeter

    International Nuclear Information System (INIS)

    Maidantchik, Carmen; Gomes, Andressa Andrea Sivollela; Marroquim, Fernando

    2011-01-01

    Full text: The scintillator tile hadronic calorimeter (TileCal) of the ATLAS detector measures the energy of the particles resulting from a collision. The calorimetry system was designed to absorb the energy of the particles that cross the detector and is composed of three barrels, each one equally divided into 64 modules. The ionizing particles that cross the tiles induce the production of light, whose intensity is proportional to the energy deposited by the fragment. The produced light propagates through the tiles towards the edges, where it is absorbed and carried until reaching the photomultiplier tubes (PMTs), also known as electronic readout channels. Each module combines up to 45 PMTs. For each run, the reconstruction process starts with a data analysis that can comprise different levels of information granularity, down to the level of individual PMTs. Following this phase, the Data Quality Monitoring Framework (DQMF) system automatically generates quality indicators associated with the channels. Depending on the configuration registered in the DQMF, the channel status can be automatically defined as good, affected or bad. The status of each module is defined by the percentage of good, affected or bad channels. At this point, the analysis of modules allows the identification of problematic ones by examining plots that are automatically generated during the data reconstruction stage. Then, the performance of a module is analysed over a time period that encompasses different types of runs. In this last step, the list of problematic channels can be modified through the insertion or exclusion of PMTs, as in the case where a channel is substituted. Additionally, during the whole calorimeter operation, it is fundamental to identify the electronic channels that are active, dead (not working), noisy, or saturated in the signal digitisation process. The Monitoring and Calibration Web System (MCWS) was

  19. Bringing it All Together: NODC's Geoportal Server as an Integration Tool for Interoperable Data Services

    Science.gov (United States)

    Casey, K. S.; Li, Y.

    2011-12-01

    The US National Oceanographic Data Center (NODC) has implemented numerous interoperable data technologies in recent years to enhance the discovery, understanding, and use of the vast quantities of data in the NODC archives. These services include OPeNDAP's Hyrax server, Unidata's THREDDS Data Server (TDS), NOAA's Live Access Server (LAS), and most recently the ESRI ArcGIS Server. Combined, these technologies enable NODC to provide access to its data holdings and products through most of the commonly-used standardized web services like the Data Access Protocol (DAP) and the Open Geospatial Consortium suite of services such as the Web Mapping Service (WMS) and Web Coverage Service (WCS). Despite the strong demand for and use of these services, the acronym-rich environment of services can also result in confusion for producers of data to the NODC archives, for consumers of data from the NODC archives, and for the data stewards at the archives as well. The situation is further complicated by the fact that NODC also maintains some ad hoc services like WODselect, and that not all services can be applied to all of the tens of thousands of collections in the NODC archive; where once every data set was available only through FTP and HTTP servers, now many are also available from the LAS, TDS, Hyrax, and ArcGIS Server. To bring order and clarity to this potentially confusing collection of services, NODC deployed the Geoportal Server into its Archive Management System as an integrating technology that brings together its various data access, visualization, and discovery services as well as its overall metadata management workflows. While providing an enhanced web-based interface for more integrated human-to-machine discovery and access, the deployment also enables NODC for the first time to support a robust set of machine-to-machine discovery services such as the Catalog Service for the Web (CS/W), OpenSearch, and Search and Retrieval via URL (SRU). This approach allows NODC

  20. The TDAQ Analytics Dashboard: a real-time web application for the ATLAS TDAQ control infrastructure

    CERN Document Server

    Magnoni, L; The ATLAS collaboration; Sloper, J E

    2011-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) infrastructure is responsible for filtering and transferring ATLAS experimental data from the detectors to mass storage systems. It relies on a large, distributed computing environment composed of thousands of software applications running concurrently. In such a complex environment, information sharing is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking runs, the streams of messages sent by applications and the data published via information services are constantly monitored by experts to verify the correctness of running operations and to understand problematic situations. To simplify and improve system analysis and error detection tasks, we developed the TDAQ Analytics Dashboard, a web application that aims to collect, correlate and effectively visualize this real-time flow of information. The TDAQ Analytics Dashboard is composed of two main entities that reflect the twofold scope of the application. The fi...

  1. The TDAQ Analytics Dashboard: a real-time web application for the ATLAS TDAQ control infrastructure

    CERN Document Server

    Magnoni, L; Sloper, J E

    2010-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) infrastructure is responsible for filtering and transferring ATLAS experimental data from the detectors to mass storage systems. It relies on a large, distributed computing environment composed of thousands of software applications running concurrently. In such a complex environment, information sharing is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking runs, the streams of messages sent by applications and the data published via information services are constantly monitored by experts to verify the correctness of running operations and to understand problematic situations. To simplify and improve system analysis and error detection tasks, we developed the TDAQ Analytics Dashboard, a web application that aims to collect, correlate and effectively visualize this real-time flow of information. The TDAQ Analytics Dashboard is composed of two main entities that reflect the twofold scope of the application. The fi...

  2. osFP: a web server for predicting the oligomeric states of fluorescent proteins.

    Science.gov (United States)

    Simeon, Saw; Shoombuatong, Watshara; Anuwongcharoen, Nuttapat; Preeyanon, Likit; Prachayasittikul, Virapong; Wikberg, Jarl E S; Nantasenamat, Chanin

    2016-01-01

    Currently, monomeric fluorescent proteins (FP) are ideal markers for protein tagging. The prediction of oligomeric states is helpful for enhancing live biomedical imaging. Computational prediction of FP oligomeric states can accelerate protein engineering efforts to create monomeric FPs. To the best of our knowledge, this study represents the first computational model for predicting and analyzing FP oligomerization directly from the amino acid sequence. After data curation, an exhaustive data set consisting of 397 non-redundant FP oligomeric states was compiled from the literature. Benchmarking of the protein descriptors revealed that the model built with amino acid composition descriptors was the top-performing model, with accuracy, sensitivity and specificity in excess of 80% and MCC greater than 0.6 for all three data subsets (training, tenfold cross-validation and external sets). The model provided insights into the important residues governing the oligomerization of FP. To maximize the benefit of the generated predictive model, it was implemented as a web server under the R programming environment. osFP affords a user-friendly interface that can be used to predict the oligomeric state of FP using the protein sequence. The advantage of osFP is that it is platform-independent, meaning that it can be accessed via a web browser on any operating system and device. osFP is freely accessible at http://codes.bio/osfp/ while the source code and data set are provided on GitHub at https://github.com/chaninn/osFP/.
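
    The best-performing descriptor reported above is amino acid composition. A minimal, generic computation of that descriptor (not the osFP code) is sketched below.

```python
# Amino acid composition: the fraction of each of the 20 standard residues in a
# protein sequence, here used as a simple fixed-length feature vector.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def amino_acid_composition(sequence):
    """Return a 20-element list of residue frequencies for the sequence."""
    sequence = sequence.upper()
    length = len(sequence) or 1
    return [sequence.count(aa) / length for aa in AMINO_ACIDS]

if __name__ == "__main__":
    # Short toy fragment; a real fluorescent protein is roughly 230 residues long.
    features = amino_acid_composition("MSKGEELFTGVVPILVELDGDVNGHKFSVSG")
    for aa, f in zip(AMINO_ACIDS, features):
        print(f"{aa}: {f:.3f}")
```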

  3. Installing and Testing a Server Operating System

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2003-08-01

    Full Text Available The paper is based on the experience of the author with the administration of the FreeBSD server operating system on three servers in use under the academicdirect.ro domain. The paper describes a set of installation, preparation, and administration aspects of a FreeBSD server. The first issue of the paper is the installation procedure of the FreeBSD operating system on the i386 computer architecture. The problems discussed are boot disk preparation and use, hard disk partitioning, and operating system installation using an existing network topology and an Internet connection. The second issue is the optimization of the operating system and the installation and configuration of server services. The problems discussed are kernel and service configuration, and system and service optimization. The third issue is about client-server applications. Using operating system utility calls, we present an original application which displays system information in a friendly web interface. An original program designed for molecular structure analysis was adapted for system performance comparisons, and it serves as the basis for a discussion of the computation speed of Pentium, Pentium II and Pentium III processors. The last issue of the paper discusses installation and configuration aspects of a dial-in service on a UNIX-based operating system. The discussion includes serial ports, ppp and pppd service configuration, and the use of ppp and tun devices.

  4. Report to users of ATLAS

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1997-03-01

    This report covers the following topics: (1) status of the ATLAS accelerator; (2) progress in R and D towards a proposal for a National ISOL Facility; (3) highlights of recent research at ATLAS; (4) the move of Gammasphere from LBNL to ANL; (5) the Accelerator Target Development Laboratory; (6) the Program Advisory Committee; (7) the ATLAS User Group Executive Committee; and (8) the ATLAS user handbook, available on the World Wide Web. A brief summary is given for each topic

  5. The Monitoring and Calibration Web Systems for the ATLAS Tile Calorimeter Data Quality Analysis

    CERN Document Server

    Sivolella, A; The ATLAS collaboration; Ferreira, F

    2012-01-01

    The Tile Calorimeter (TileCal), one of the ATLAS detectors, has four partitions, each of which contains 64 modules, and each module has up to 48 PhotoMultipliers (PMTs), totaling more than 10,000 electronic channels. The Monitoring and Calibration Web System (MCWS) supports data quality analyses at the channel level. This application was developed to assess the detector status and verify its performance, presenting the list of known problematic channels from the official database that stores the detector conditions data (COOL). The bad channels list guides the data quality validator during analyses in order to identify new problematic channels. Through the system, it is also possible to update the channels list directly in the COOL database. MCWS generates results, such as eta-phi plots and comparative tables with the percentage of masked channels, which reflect the TileCal status, and it is accessible to the whole ATLAS collaboration. Annually, there is an intervention on the LHC (Large Hadron Collider) when the detector equipment (P...

  6. KFC Server: interactive forecasting of protein interaction hot spots.

    Science.gov (United States)

    Darnell, Steven J; LeGault, Laura; Mitchell, Julie C

    2008-07-01

    The KFC Server is a web-based implementation of the KFC (Knowledge-based FADE and Contacts) model, a machine learning approach for the prediction of binding hot spots, or the subset of residues that account for most of a protein interface's binding free energy. The server facilitates the automated analysis of a user-submitted protein-protein or protein-DNA interface and the visualization of its hot spot predictions. For each residue in the interface, the KFC Server characterizes its local structural environment, compares that environment to the environments of experimentally determined hot spots and predicts if the interface residue is a hot spot. After the computational analysis, the user can visualize the results using an interactive job viewer able to quickly highlight predicted hot spots and surrounding structural features within the protein structure. The KFC Server is accessible at http://kfc.mitchell-lab.org.

  7. The ATLAS PanDA Monitoring System and its Evolution

    Science.gov (United States)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.

  8. The ATLAS PanDA Monitoring System and its Evolution

    International Nuclear Information System (INIS)

    Klimentov, A; Nevski, P; Wenaus, T; Potekhin, M

    2011-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.

  9. MORPH-PRO: a novel algorithm and web server for protein morphing.

    Science.gov (United States)

    Castellana, Natalie E; Lushnikov, Andrey; Rotkiewicz, Piotr; Sefcovic, Natasha; Pevzner, Pavel A; Godzik, Adam; Vyatkina, Kira

    2013-07-11

    Proteins are known to be dynamic in nature, changing from one conformation to another while performing vital cellular tasks. It is important to understand these movements in order to better understand protein function. At the same time, experimental techniques provide us with only single snapshots of the whole ensemble of available conformations. Computational protein morphing provides a visualization of a protein structure transitioning from one conformation to another by producing a series of intermediate conformations. We present a novel, efficient morphing algorithm, Morph-Pro, based on linear interpolation. We also show that, apart from visualization, morphing can be used to provide plausible intermediate structures. We test this by using the intermediate structures of a c-Jun N-terminal kinase (JNK1) conformational change in a virtual docking experiment. The structures are shown to dock with higher scores to known JNK1-binding ligands than structures solved using X-ray crystallography. This experiment demonstrates the potential applications of the intermediate structures in modeling or virtual screening efforts. Visualization of protein conformational changes is important for characterization of protein function. Furthermore, the intermediate structures produced by our algorithm are good approximations to true structures. We believe there is great potential for these computationally predicted structures in protein-ligand docking experiments and virtual screening. The Morph-Pro web server can be accessed at http://morph-pro.bioinf.spbau.ru.
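
    Since Morph-Pro is described as being based on linear interpolation, the core operation can be sketched generically with NumPy (toy coordinates; the published algorithm adds further constraints not shown here).

```python
# Generic linear interpolation between two conformations of the same protein:
# each intermediate is a weighted average of start and end atomic coordinates.
# Real morphing methods add geometric constraints; this is only the core idea.
import numpy as np

def interpolate_conformations(start_xyz, end_xyz, n_intermediates=5):
    """Yield n_intermediates structures between start and end (N x 3 arrays)."""
    start_xyz = np.asarray(start_xyz, dtype=float)
    end_xyz = np.asarray(end_xyz, dtype=float)
    for i in range(1, n_intermediates + 1):
        t = i / (n_intermediates + 1)          # interpolation fraction in (0, 1)
        yield (1.0 - t) * start_xyz + t * end_xyz

if __name__ == "__main__":
    open_form = [[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]]
    closed_form = [[0.0, 0.0, 0.0], [3.0, 2.0, 0.0], [5.0, 5.0, 0.0]]
    for k, frame in enumerate(interpolate_conformations(open_form, closed_form), 1):
        print(f"intermediate {k}:\n{frame}")
```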

  10. Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS

    International Nuclear Information System (INIS)

    McKee, Shawn; Lake, Andrew; Laurens, Philippe; Severini, Horst; Wlodek, Tomasz; Wolff, Stephen; Zurawski, Jason

    2012-01-01

    Global scientific collaborations, such as ATLAS, continue to push the network requirements envelope. Data movement in this collaboration will routinely include the regular exchange of petabytes of datasets between the collection and analysis facilities in the coming years. These requirements place a high emphasis on networks functioning at peak efficiency and availability; the lack thereof could mean critical delays in the overall scientific progress of distributed data-intensive experiments like ATLAS. Network operations staff routinely must deal with problems deep in the infrastructure; this may be as benign as replacing a failing piece of equipment, or as complex as dealing with a multi-domain path that is experiencing data loss. In either case, it is crucial that effective monitoring and performance analysis tools are available to ease the burden of management. We will report on our experiences deploying and using the perfSONAR-PS Performance Toolkit at ATLAS sites in the United States. This software creates a dedicated monitoring server, capable of collecting and performing a wide range of passive and active network measurements. Each independent instance is managed locally, but able to federate on a global scale, enabling a full view of the network infrastructure that spans domain boundaries. This information, available through web service interfaces, can easily be retrieved to create customized applications. The US ATLAS collaboration has developed a centralized “dashboard” offering network administrators, users, and decision makers the ability to see the performance of the network at a glance. The dashboard framework includes the ability to notify users (alarm) when problems are found, thus allowing rapid response to potential problems and making perfSONAR-PS crucial to the operation of our distributed computing infrastructure.

  11. Web server of the Centre for Photonuclear Experiments Data of the Scientific Research Institute for Nuclear Physics, Moscow State University: Hypertext version of the nuclear physics database

    Energy Technology Data Exchange (ETDEWEB)

    Boboshin, I N; Varlamov, A V; Varlamov, V V; Rudenko, D S; Stepanov, M E [D.V. Skobel'tsyn Scientific Research Institute for Nuclear Physics, M.V. Lomonosov Moscow State University, Centre for Photonuclear Experiments Data (Russian Federation)]

    2001-02-01

    The nuclear databases which have been developed at the Centre for Photonuclear Experiments Data of the D.V. Skobel'tsyn Scientific Research Institute for Nuclear Physics, M.V. Lomonosov Moscow State University, and put on the Centre's web server, are presented. The possibilities for working with these databases on the Internet are described. (author)

  12. Web server of the Centre for Photonuclear Experiments Data of the Scientific Research Institute for Nuclear Physics, Moscow State University: Hypertext version of the nuclear physics database

    International Nuclear Information System (INIS)

    Boboshin, I.N.; Varlamov, A.V.; Varlamov, V.V.; Rudenko, D.S.; Stepanov, M.E.

    2001-01-01

    The nuclear databases which have been developed at the Centre for Photonuclear Experiments Data of the D.V. Skobel'tsyn Scientific Research Institute for Nuclear Physics, M.V. Lomonosov Moscow State University, and put on the Centre's web server, are presented. The possibilities for working with these databases on the Internet are described. (author)

  13. ATLAS production system

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Golubkov, Dmitry; Maeno, Tadashi; Mashinistov, Ruslan; Wenaus, Torre; Padolski, Siarhei

    2016-01-01

    The second generation of the ATLAS production system, called ProdSys2, is a distributed workload manager used by thousands of physicists to analyze the data remotely, with the volume of processed data beyond the exabyte scale, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. Besides job definition, the Production System also includes a flexible web user interface, which implements a user-friendly environment for the main ATLAS workflows, e.g. a simple way of combining different data flows, and real-time monitoring, optimised for use with the huge amount of information to present. We present an overview of the ATLAS Production System major components: job and task definition, workflow manager web user i...

  14. Dear ATLAS colleagues,

    CERN Multimedia

    PH Department

    2008-01-01

    We are collecting old pairs of glasses to take out to Mali, where they can be re-used by people there. The price for a pair of glasses can often exceed 3 months salary, so they are prohibitively expensive for many people. If you have any old spectacles you can donate, please put them in the special box in the ATLAS secretariat, bldg.40-4-D01 before the Christmas closure on 19 December so we can take them with us when we leave for Africa at the end of the month. (more details in ATLAS e-news edition of 29 September 2008: http://atlas-service-enews.web.cern.ch/atlas-service-enews/news/news_mali.php) many thanks! Katharine Leney co-driver of the ATLAS car on the Charity Run to Mali

  15. Tierless Web programming in ML

    OpenAIRE

    Radanne , Gabriel

    2017-01-01

    Eliom is a dialect of OCaml for Web programming in which server and client pieces of code can be mixed in the same file using syntactic annotations. This makes it possible to build a whole application as a single distributed program, in which reusable widgets with both server and client behaviors can be defined in a composable way. Eliom is type-safe, as it ensures that communications are well-behaved through novel language constructs that match the specificity of Web programming. Eliom is als...

  16. Multimedia medical data archive and retrieval server on the Internet

    Science.gov (United States)

    Komo, Darmadi; Levine, Betty A.; Freedman, Matthew T.; Mun, Seong K.; Tang, Y. K.; Chiang, Ted T.

    1997-05-01

    The Multimedia Medical Data Archive and Retrieval Server has been installed at the Imaging Science and Information Systems (ISIS) Center in Georgetown University Medical Center to provide medical data archive and retrieval support for medical researchers. The medical data includes text, images, sound, and video. All medical data is keyword indexed using a database management system, placed temporarily in a staging area and then transferred to a StorageTek one terabyte tape library system with a robotic arm for permanent archiving. There are two methods of interaction with the system. The first method is to use a web browser with HTML functions to perform insert, query, update, and retrieve operations. These generate dynamic SQL calls to the database and produce StorageTek API calls to the tape library. The HTML functions consist of a database, a StorageTek interface, an HTTP server, the common gateway interface, and Java programs. The second method is to issue a DICOM store command, which is translated by the system's DICOM server into SQL calls and then into StorageTek API calls to the tape library. The system performs as both an Internet and a DICOM server using standard protocols such as HTTP, HTML, Java, and DICOM. Users with proper authentication can log on to the server from anywhere on the Internet using a standard web browser, resulting in a user-friendly, open, and platform-independent solution for archiving multimedia medical data. It represents a complex integration of different components including a robotic tape storage system, database, user interface, WWW protocols, and TCP/IP networking. The user will only deal with the WWW and DICOM server components of the system; the database and robotic tape library system are transparent, and the user will not know that the medical data is stored on magnetic tapes. The server provides the researchers a cost-effective tool for archiving and retrieving medical data across a TCP/IP network environment. It will

  17. User-Level QoS-Adaptive Resource Management in Server End-Systems

    National Research Council Canada - National Science Library

    Abdelzaher, Tarek F; Shin, Kang G; Bhatti, Nina

    2003-01-01

    Proliferation of QoS-sensitive client-server Internet applications such as high-quality audio, video-on-demand, e-commerce, and commercial web hosting has generated an impetus to provide performance guarantees...

  18. The HADDOCK2.2 Web Server: User-Friendly Integrative Modeling of Biomolecular Complexes.

    Science.gov (United States)

    van Zundert, G C P; Rodrigues, J P G L M; Trellet, M; Schmitz, C; Kastritis, P L; Karaca, E; Melquiond, A S J; van Dijk, M; de Vries, S J; Bonvin, A M J J

    2016-02-22

    The prediction of the quaternary structure of biomolecular macromolecules is of paramount importance for fundamental understanding of cellular processes and drug design. In the era of integrative structural biology, one way of increasing the accuracy of modeling methods used to predict the structure of biomolecular complexes is to include as much experimental or predictive information as possible in the process. This has been at the core of our information-driven docking approach HADDOCK. We present here the updated version 2.2 of the HADDOCK portal, which offers new features such as support for mixed molecule types, additional experimental restraints and improved protocols, all of this in a user-friendly interface. With well over 6000 registered users and 108,000 jobs served, an increasing fraction of which on grid resources, we hope that this timely upgrade will help the community to solve important biological questions and further advance the field. The HADDOCK2.2 Web server is freely accessible to non-profit users at http://haddock.science.uu.nl/services/HADDOCK2.2. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Pathview Web: user friendly pathway visualization and data integration.

    Science.gov (United States)

    Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory

    2017-07-03

    Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, so as to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. A Dynamic Extension of ATLAS Run Query Service

    CERN Document Server

    Buliga, Alexandru

    2015-01-01

    The ATLAS RunQuery is a primarily web-based service for the ATLAS community to access meta information about the data taking in a concise format. In order to provide a better user experience, the service was moved to a new technology involving concepts such as Web Sockets, on-demand data, client-side scripting, memory caching and parallelized execution.

  1. ATLAS TDAQ System Administration:

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration; Bogdanchikov, Alexander; Ballestrero, Sergio; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Scannicchio, Diana; Twomey, Matthew Shaun; Voronkov, Artem

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data, streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data read out from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) there has been a tremendous amount of work done by the ATLAS TDAQ System Administrators, implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. During data taking only critical security updates are applied and broken hardware is replaced to ensure a stable operational environment. The LS1 provided an excellent opportunity to look into new technologies and applications that would help to improve and streamline the daily tasks of not only the System Administrators, but also of the scientists who wil...

  2. AllerTool: a web server for predicting allergenicity and allergic cross-reactivity in proteins.

    Science.gov (United States)

    Zhang, Zong Hong; Koh, Judice L Y; Zhang, Guang Lan; Choo, Khar Heng; Tammi, Martti T; Tong, Joo Chuan

    2007-02-15

    Assessment of potential allergenicity and patterns of cross-reactivity is necessary whenever novel proteins are introduced into the human food chain. Current bioinformatic methods in allergology focus mainly on the prediction of allergenic proteins, with no information on cross-reactivity patterns among known allergens. In this study, we present AllerTool, a web server with essential tools for the assessment of predicted as well as published cross-reactivity patterns of allergens. The analysis tools include graphical representation of allergen cross-reactivity information; a local sequence comparison tool that displays information on known cross-reactive allergens; a sequence similarity search tool for assessment of cross-reactivity in accordance with FAO/WHO Codex Alimentarius guidelines; and a method based on support vector machines (SVM). Ten-fold cross-validation results showed that the area under the receiver operating curve (AROC) of the SVM models is 0.90, with 86.00% sensitivity (SE) at a specificity (SP) of 86.00%. AllerTool is freely available at http://research.i2r.a-star.edu.sg/AllerTool/.

  3. Opal web services for biomedical applications.

    Science.gov (United States)

    Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W

    2010-07-01

    Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.

  4. Delivering Electronic Resources with Web OPACs and Other Web-based Tools: Needs of Reference Librarians.

    Science.gov (United States)

    Bordeianu, Sever; Carter, Christina E.; Dennis, Nancy K.

    2000-01-01

    Describes Web-based online public access catalogs (Web OPACs) and other Web-based tools as gateway methods for providing access to library collections. Addresses solutions for overcoming barriers to information, such as through the implementation of proxy servers and other authentication tools for remote users. (Contains 18 references.)…

  5. Integration of ROOT Notebooks as an ATLAS analysis web-based tool in outreach and public data release

    CERN Document Server

    Sanchez, Arturo; The ATLAS collaboration

    2016-01-01

    The integration of the ROOT data analysis framework with the Jupyter Notebook technology presents great potential for the enhancement and expansion of educational and training programs: ranging from university students in their early years, through new ATLAS PhD students and postdoctoral researchers, to senior analysers and professors who want to renew their contact with data analysis or to include a friendly yet very powerful open source tool in the classroom. Such tools have already been tested in several environments, and a fully web-based integration together with Open Access Data repositories makes it possible to go a step further in ATLAS's pursuit of integration between several CERN projects in the field of education and training, developing new computing solutions along the way.
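
    As an example of the kind of cell such ROOT/Jupyter notebooks host (a minimal PyROOT sketch assuming a working ROOT installation; it is not taken from the ATLAS outreach material):

```python
# Minimal PyROOT cell of the sort run in a ROOT/Jupyter notebook: fill a
# histogram with toy data and draw it. Requires a local ROOT installation.
import ROOT

h = ROOT.TH1F("h_mass", "Toy invariant mass; m [GeV]; events", 50, 0.0, 200.0)
rng = ROOT.TRandom3(42)
for _ in range(10000):
    h.Fill(rng.Gaus(91.0, 2.5))      # toy resonance around 91 GeV

canvas = ROOT.TCanvas("c", "c", 800, 600)
h.Draw()
canvas.SaveAs("toy_mass.png")        # in a notebook the canvas can also be shown inline
```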

  6. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information.

    CERN Document Server

    Harwood, A; The ATLAS collaboration; Magnoni, L; Vandelli, W; Savu, D

    2011-01-01

    This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks, that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user defined criteria. Finally, ...

  7. ADAM Project – A generic web interface for retrieval and display of ATLAS TDAQ information.

    CERN Document Server

    Harwood, A; The ATLAS collaboration; Lehmann Miotto, G

    2011-01-01

    This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks, which are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, there is currently no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple diversely structured providers. It is capable of aggregating and correlating the data according to user-defined criteria. Finally it v...

  8. Scaling HEP to Web size with RESTful protocols: The frontier example

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2011-01-01

    The World-Wide-Web has scaled to an enormous size. The largest single contributor to its scalability is the HTTP protocol, particularly when used in conformity to REST (REpresentational State Transfer) principles. High Energy Physics (HEP) computing also has to scale to an enormous size, so it makes sense to base much of it on RESTful protocols. Frontier, which reads databases with an HTTP-based RESTful protocol, has successfully scaled to deliver production detector conditions data from both the CMS and ATLAS LHC detectors to hundreds of thousands of computer cores worldwide. Frontier is also able to re-use a large amount of standard software that runs the Web: on the clients, caches, and servers. I discuss the specific ways in which HTTP and REST enable high scalability for Frontier. I also briefly discuss another protocol used in HEP computing that is HTTP-based and RESTful, and another protocol that could benefit from it. My goal is to encourage HEP protocol designers to consider HTTP and REST whenever the same information is needed in many places.
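
    A minimal sketch of the idea behind such RESTful access, assuming a hypothetical Frontier-style endpoint: because the whole query is encoded in a plain HTTP GET, any standard web cache between client and server can reuse the response. The URL and parameters below are illustrative placeholders, not the actual Frontier request format.

        import requests

        # Hypothetical conditions-data endpoint; the query is carried entirely in the URL.
        url = "http://frontier.example.org/FrontierProd/Frontier"
        params = {"encoding": "BLOBzip", "query": "conditions for run 12345"}  # placeholders

        r = requests.get(url, params=params, timeout=30)
        r.raise_for_status()

        # Cache-related headers tell intermediate squid/HTTP caches how long the
        # payload may be reused without contacting the central database again.
        print(r.headers.get("Cache-Control"), len(r.content), "bytes")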

  9. Continuous Integration in PHP web applications development

    OpenAIRE

    Hujer, Martin

    2011-01-01

    This work deals with continuous integration of web applications, especially those in PHP language. The main objective is the selection of the server for continuous integration, its deployment and configuration for continuous integration of PHP web applications. The first chapter describes the concept of continuous integration and its individual techniques. The second chapter deals with the choice of server for continuous integration and its basic settings. The third chapter contains an overvi...

  10. Web-based computer-aided-diagnosis (CAD) system for bone age assessment (BAA) of children

    Science.gov (United States)

    Zhang, Aifeng; Uyeda, Joshua; Tsao, Sinchai; Ma, Kevin; Vachon, Linda A.; Liu, Brent J.; Huang, H. K.

    2008-03-01

    Bone age assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology to evaluate the stage of skeletal maturation based on a left hand and wrist radiograph. The most commonly used standard, the Greulich and Pyle (G&P) Hand Atlas, was developed 50 years ago and is based exclusively on a Caucasian population. Moreover, inter- and intra-observer discrepancies using this method create a need for an objective and automatic BAA method. A digital hand atlas (DHA) has been collected with 1,400 hand images of normal children of Asian, African American, Caucasian and Hispanic descent. Based on the DHA, a fully automatic, objective computer-aided-diagnosis (CAD) method was developed and adapted to specific populations. To bring the DHA and CAD method to the clinical environment as a useful tool for assisting radiologists to achieve higher accuracy in BAA, a web-based system with a direct connection to a clinical site was designed as a novel clinical implementation approach for online and real-time BAA. The core of the system, a CAD server, receives the image from the clinical site, processes it with the CAD method and, finally, generates a report. A web service publishes the results, and radiologists at the clinical site can review them online within minutes. This prototype can be easily extended to multiple clinical sites and will provide the foundation for broader use of the CAD system for BAA.

  11. Oracle announces increased uptake of Oracle9i Application Server

    CERN Multimedia

    2002-01-01

    Oracle Europe this week announced that increasingly, companies in the region are selecting the Oracle9i Application Server (Oracle9iAS) to develop and deploy web-based business applications. CERN is one of its customers (1/2 page).

  12. Educational use of World Wide Web pages on CD-ROM.

    Science.gov (United States)

    Engel, Thomas P; Smith, Michael

    2002-01-01

    The World Wide Web is increasingly important for medical education. Internet-served pages may also be used on a local hard disk or CD-ROM without a network or server. This allows authors to reuse existing content and provide access to users without a network connection. CD-ROM offers several advantages over network delivery of Web pages for some applications. However, creating Web pages for CD-ROM requires careful planning. Issues include file names, relative links, directory names, default pages, server-created content, image maps, other file types and embedded programming. With care, it is possible to create server-based pages that can be copied directly to CD-ROM. In addition, Web pages on CD-ROM may reference Internet-served pages to provide the best features of both methods.

  13. Beginning PHP, Apache, MySQL web development

    CERN Document Server

    Glass, Michael K; Naramore, Elizabeth; Mailer, Gary; Stolz, Jeremy; Gerner, Jason

    2004-01-01

    An ideal introduction to the entire process of setting up a Web site using PHP (a scripting language), MySQL (a database management system), and Apache (a Web server). * Programmers will be up and running in no time, whether they're using Linux or Windows servers. * Shows readers step by step how to create several Web sites that share common themes, enabling readers to use these examples in real-world projects. * Invaluable reading for even the experienced programmer whose current site has outgrown the traditional static structure and who is looking for a way to upgrade to a more efficient, user-f

  14. EzMol: A Web Server Wizard for the Rapid Visualization and Image Production of Protein and Nucleic Acid Structures.

    Science.gov (United States)

    Reynolds, Christopher R; Islam, Suhail A; Sternberg, Michael J E

    2018-01-31

    EzMol is a molecular visualization Web server in the form of a software wizard, located at http://www.sbg.bio.ic.ac.uk/ezmol/. It is designed for easy and rapid image manipulation and display of protein molecules, and is intended for users who need to quickly produce high-resolution images of protein molecules but do not have the time or inclination to use a software molecular visualization system. EzMol allows the upload of molecular structure files in PDB format to generate a Web page including a representation of the structure that the user can manipulate. EzMol provides intuitive options for chain display, adjusting the color/transparency of residues, side chains and protein surfaces, and for adding labels to residues. The final adjusted protein image can then be downloaded as a high-resolution image. There are a range of applications for rapid protein display, including the illustration of specific areas of a protein structure and the rapid prototyping of images. Copyright © 2018. Published by Elsevier Ltd.

  15. Forecasting of interaction between bee propolis and protective antigenic domain in anthrax using the software and bioinformatics web servers

    Directory of Open Access Journals (Sweden)

    Elmira Mohammadi

    2017-01-01

    Full Text Available Background: The protective antigen of anthrax toxin plays an important role in the pathogenesis of the toxin after binding to cell receptors. The purpose of this study was to investigate the interaction of the anthrax toxin protective antigen with four major propolis components, caffeic acid, benzyl caffeate, cinnamic acid and kaempferol, using bioinformatics software and web servers. Methods: The three-dimensional structure of the protective antigen (receptor) was obtained from the Protein Data Bank (PDB). The four main components of propolis were selected as ligands and their 3D structures were obtained from the ChemSpider and ZINC compound databases. The interaction of each ligand with the receptor was assessed by the SwissDock server (http://www.swissdock.ch/) and the BSP-SLIM server (http://zhanglab.ccmb.med.umich.edu/BSP-SLIM). Docking results are reported as FullFitness values (in kcal/mol). Identification of the amino acids involved in the ligand-receptor interaction was performed using the UCSF Chimera program (http://www.cgl.ucsf.edu/). Results: The BSP-SLIM results showed that the strongest interactions with the protective antigen were obtained for benzyl caffeate, caffeic acid, kaempferol and cinnamic acid, in that order. The SwissDock results showed that caffeic acid had a ΔG of -9.10 kcal/mol and a FullFitness of -993.16 kcal/mol. Analysis of the interactions between the ligands and the amino acids of the protective antigen indicated that the interaction of caffeic acid with glutamic acid 117 had an energy of -15.5429 kcal/mol. Conclusion: Finding strong and safe inhibitors of anthrax toxin is a very useful approach to blocking its toxicity to cells. In this study the binding ability of four flavonoids to the protective antigen was studied. Glutamic acid 117 is very effective

  16. JWIG: Yet Another Framework for Maintainable and Secure Web Applications

    DEFF Research Database (Denmark)

    Møller, Anders; Schwarz, Mathias Romme

    2009-01-01

    Although numerous frameworks for web application programming have been developed in recent years, writing web applications remains a challenging task. Guided by a collection of classical design principles, we propose yet another framework. It is based on a simple but flexible server-oriented architecture that coherently supports general aspects of modern web applications, including dynamic XML construction, session management, data persistence, caching, and authentication, but it also simplifies programming of server-push communication and integration of XHTML-based applications and XML-based web services. The resulting framework provides a novel foundation for developing maintainable and secure web applications.

  17. Monitoring and controlling ATLAS data management: The Rucio web user interface

    Science.gov (United States)

    Lassnig, M.; Beermann, T.; Vigne, R.; Barisits, M.; Garonne, V.; Serfon, C.

    2015-12-01

    The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component, and allows easy integration for user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done massively parallel due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human or programmatic access, making it easy to access selective parts of the information both in constrained frontends like web-browsers as well as remote services. This contribution will detail the reasons for these principles and the design choices taken. Additionally, the implementation, the interactions with external systems, and an evaluation of the system in production, both from a technological and user perspective, conclude this contribution.

  18. A Web Based Financial and Accounting Software Application

    Directory of Open Access Journals (Sweden)

    Doru E. TILIUTE

    2010-01-01

    Full Text Available Client-server applications have become more attractive in comparison with their desktop-type counterparts due to some incontestable advantages. Among client-server applications, some use the Web environment, providing full access from anywhere and at any time to all application features. The present work presents the first results in the development of a web-based financial and accounting application using open-source technologies and programming languages (Apache, MySQL, PHP and JavaScript).

  19. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    Science.gov (United States)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web-application for display and analysis of geo-science data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript and perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS -- the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back- end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for data base access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. Other back-end services may include generation of Google

  20. QuadBase2: web server for multiplexed guanine quadruplex mining and visualization

    Science.gov (United States)

    Dhapola, Parashar; Chowdhury, Shantanu

    2016-01-01

    DNA guanine quadruplexes or G4s are non-canonical DNA secondary structures which affect genomic processes like replication, transcription and recombination. G4s are computationally identified by specific nucleotide motifs, also called putative G4 (PG4) motifs. Despite the general relevance of these structures, there is currently no tool available that allows batch queries and genome-wide analysis of these motifs in a user-friendly interface. QuadBase2 (quadbase.igib.res.in) presents a completely reinvented web server version of the previously published QuadBase database. QuadBase2 enables users to mine PG4 motifs in up to 178 eukaryotes through the EuQuad module. This module interfaces with the Ensembl Compara database to allow users to mine PG4 motifs in the orthologues of genes of interest across eukaryotes. PG4 motifs can be mined across genes and their promoter sequences in 1719 prokaryotes through the ProQuad module. This module includes a feature that allows genome-wide mining of PG4 motifs and their visualization as circular histograms. TetraplexFinder, the module for mining PG4 motifs in user-provided sequences, is now capable of handling up to 20 MB of data. QuadBase2 is a comprehensive PG4 motif mining tool that further expands the configurations and algorithms for mining PG4 motifs in a user-friendly way. PMID:27185890
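
    For orientation, the sketch below implements the canonical putative G4 motif search commonly used in the literature (four runs of three or more guanines separated by loops of one to seven bases); the exact motif definitions and algorithms offered by QuadBase2 may differ, so treat this only as an assumed baseline.

        import re

        # Canonical PG4 pattern: G3+ N1-7 G3+ N1-7 G3+ N1-7 G3+ (overlapping matches via lookahead).
        PG4 = re.compile(r"(?=(G{3,}\w{1,7}G{3,}\w{1,7}G{3,}\w{1,7}G{3,}))", re.IGNORECASE)

        def find_pg4(seq):
            """Return (start, motif) pairs for putative G4 motifs in a DNA sequence."""
            return [(m.start(1), m.group(1)) for m in PG4.finditer(seq)]

        # Human telomeric repeat as a quick sanity check.
        print(find_pg4("AAGGGTTAGGGTTAGGGTTAGGGAA"))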

  1. An object-oriented approach to deploying highly configurable Web interfaces for the ATLAS experiment

    CERN Document Server

    Lange Ramos, Bruno; The ATLAS collaboration; Pommes, Kathy; Pavani Neto, Varlen; Vieira Arosa, Breno; Abreu Da Silva, Igor

    2015-01-01

    The ATLAS Technical Coordination maintains 17 Web systems to support its operation. These applications, ranging from supporting the process of publishing scientific papers to monitoring radiation levels in the equipment in the cavern, are constantly subject to changes in requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. Fence assembles classes to build applications by making extensive use of JSON configuration files. It relies vastly on Glance, a technology that was set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers in double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to its description thus ensuring that vi...

  2. An object-oriented approach to deploying highly configurable web interfaces for the ATLAS experiment

    CERN Document Server

    Lange Ramos, Bruno; The ATLAS collaboration; Pommes, Kathy; Pavani Neto, Varlen; Vieira Arosa, Breno

    2015-01-01

    In order to manage a heterogeneous and worldwide collaboration, the ATLAS experiment develops web systems that range from supporting the process of publishing scientific papers to monitoring equipment radiation levels. These systems are vastly supported by Glance, a technology that was set forward in 2004 to create an abstraction layer on top of varied databases that automatically recognizes their modeling and generates web search interfaces. Fence (Front ENd ENgine for glaNCE) assembles classes to build applications by making extensive use of configuration files. It produces templates of the core JSON files on top of which it is possible to create Glance-compliant search interfaces. Once the database, its schemas and tables are defined using Glance, its records can be incorporated into the templates by escaping the returned values with a reference to the column identifier wrapped in double enclosing brackets. The developer may also expand on the available configuration files to create HTML forms and securely ...

  3. The detector control web system of the ATLAS hadronic calorimeter

    International Nuclear Information System (INIS)

    Maidantchik, Carmen; Ferreira, Fernando G.; Marroquim, Fernando

    2011-01-01

    Full text: The hadronic calorimeter (TileCal) of the ATLAS experiment is a sampling device for measuring the energy of particles that cross the detector and is composed of thousands of electronic channels operating at a high rate of acquired events. A complex powering scheme, responsible for each channel, comprises low-voltage (3 V to 15 V) and high-voltage (around 800 V) power supplies and a water-based cooling system. The Detector Control System (DCS) is responsible for monitoring and controlling these mechanisms. The correct operation of the power supplies is essential for detector data acquisition. A misbehaving power supply can affect the electronic systems or, in the worst scenario, turn a whole section of the detector off, which would lead to missing events. The DCS Web System was developed to monitor the stability of power supply operation by providing a daily or monthly summary of voltages, currents and temperatures. The summary is made up of the mean and standard deviation of the monitored parameters as well as time plots. The obtained statistics are compared to preset thresholds and the system interface highlights the cases to which the collaboration should pay attention. The web system also displays voltage trips, undesired power cuts that can happen from time to time in some power supplies during their operation. As future steps, the group is developing prediction capabilities based on the analysis of the time series of the monitored parameters. It will therefore be possible to indicate which power sources should be replaced during the annual maintenance period, helping to keep a high number of live channels during data acquisition. This paper describes the DCS Web System and its functionalities, presenting preliminary results from the time series analysis. (author)

  4. ATLAS Open Data project

    CERN Document Server

    The ATLAS collaboration

    2018-01-01

    The current ATLAS model of Open Access to recorded and simulated data offers the opportunity to access datasets with a focus on education, training and outreach. This mandate supports the creation of platforms, projects, software, and educational products used all over the planet. We describe the overall status of ATLAS Open Data (http://opendata.atlas.cern) activities, from core ATLAS activities and releases to individual and group efforts, as well as educational programs, and final web- or software-based (and hard-copy) products that have been produced or are under development. The relatively large number of heterogeneous use cases currently documented is driving an upcoming release of more data and resources for the ATLAS Community and anyone interested in exploring the world of experimental particle physics and the computer sciences through data analysis.

  5. The semantic web in an SMS

    NARCIS (Netherlands)

    Valkering, Onno; de Boer, Victor; Lô, Gossa; Blankendaal, Romy; Schlobach, Stefan

    2016-01-01

    Many ICT applications and services, including those from the Semantic Web, rely on the Web for the exchange of data. This includes expensive server and network infrastructures. Most rural areas of developing countries are not reached by the Web and its possibilities, while at the same time the

  6. The control software framework of the web base

    International Nuclear Information System (INIS)

    Nakatani, Takeshi; Inamura, Yasuhiro; Ito, Takayoshi; Otomo, Toshiya

    2015-01-01

    Web browsers are one of the most platform-independent user interfaces. In particular, web pages created using responsive web design (RWD) are available for use on desktop and laptop computers, as well as tablet terminals and smart phones. We developed a common software framework, IROHA, for the instrument control system in the Materials and Life Science Experimental Facility at the Japan Proton Accelerator Research Complex to build a flexible and scalable system by adopting XML/HTTP. However, its user interface was platform-dependent, and we wanted it to be more user-friendly. In 2013, we developed the prototype of a new software framework, IROHA2, comprising several device control servers and an instrument management server, retaining the flexibility and scalability of IROHA. We also adopted the Bootstrap framework to create an RWD user interface for these servers. (author)

  7. PoPMuSiC 2.1: a web server for the estimation of protein stability changes upon mutation and sequence optimality

    Directory of Open Access Journals (Sweden)

    Rooman Marianne

    2011-05-01

    Full Text Available Abstract Background The rational design of modified proteins with controlled stability is of extreme importance in a whole range of applications, notably in the biotechnological and environmental areas, where proteins are used for their catalytic or other functional activities. Future breakthroughs in medical research may also be expected from an improved understanding of the effect of naturally occurring disease-causing mutations at the molecular level. Results PoPMuSiC-2.1 is a web server that predicts the thermodynamic stability changes caused by single-site mutations in proteins, using a linear combination of statistical potentials whose coefficients depend on the solvent accessibility of the mutated residue. PoPMuSiC shows good prediction performance (a correlation coefficient of 0.8 between predicted and measured stability changes, in cross validation, after exclusion of 10% outliers). It is moreover very fast, allowing the prediction of the stability changes resulting from all possible mutations in a medium-sized protein in less than a minute. This unique functionality is implemented in a user-friendly way in PoPMuSiC and is particularly easy to exploit. Another new functionality of our server concerns the estimation of the optimality of each amino acid in the sequence with respect to the stability of the structure. It may be used to detect structural weaknesses, i.e. clusters of non-optimal residues, which represent particularly interesting sites for introducing targeted mutations. This sequence optimality data is also expected to have significant implications in the prediction and analysis of particular structural or functional protein regions. To illustrate the interest of this new functionality, we apply it to a dataset of known catalytic sites, and show that a much larger than average concentration of structural weaknesses is detected, quantifying how these sites have been optimized for function rather than stability. Conclusion The
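
    The following toy sketch only illustrates the general form of such a predictor, a linear combination of statistical-potential terms with solvent-accessibility-dependent coefficients; the terms, weights and functional shapes are invented placeholders, not the published PoPMuSiC-2.1 model.

        def predict_ddg(potential_terms, weight_funcs, solvent_accessibility):
            """Toy model: ddG = sum_i w_i(acc) * dP_i, with acc the relative accessibility (0..1)."""
            return sum(w(solvent_accessibility) * dp
                       for w, dp in zip(weight_funcs, potential_terms))

        # Two made-up terms whose coefficients shift between buried (acc ~ 0) and exposed (acc ~ 1) sites.
        weights = [lambda acc: 1.0 - acc, lambda acc: acc]
        print(predict_ddg([0.8, -0.3], weights, solvent_accessibility=0.25))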

  8. Deep Recurrent Model for Server Load and Performance Prediction in Data Center

    Directory of Open Access Journals (Sweden)

    Zheng Huang

    2017-01-01

    Full Text Available Recurrent neural networks (RNNs) have been widely applied to many sequential tagging tasks such as natural language processing (NLP) and time series analysis, and it has been proven that RNNs work well in those areas. In this paper, we propose using an RNN with long short-term memory (LSTM) units for server load and performance prediction. Classical methods for performance prediction focus on building a relation between performance and the time domain, which requires many unrealistic hypotheses. Our model is built based on events (user requests), which are the root cause of server performance. We predict the performance of the servers using RNN-LSTM by analyzing the logs of servers in a data center, which contain user access sequences. Previous work on workload prediction could not generate detailed simulated workloads, which are useful in testing the working condition of servers. Our method provides a new way to reproduce user request sequences to solve this problem by using RNN-LSTM. Experimental results show that our models achieve good performance in generating load and predicting performance on a data set logged from an online service. We performed experiments with the nginx web server and the MySQL database server, and our methods can easily be applied to other servers in the data center.
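
    A minimal Keras sketch of this kind of model, predicting the next load value from a sliding window of past values; the window length, layer sizes and the synthetic series are illustrative assumptions standing in for the request-derived features used in the paper.

        import numpy as np
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import LSTM, Dense

        WINDOW = 32  # number of past samples used to predict the next one (assumption)

        def make_windows(series, window=WINDOW):
            # Build (samples, timesteps, features) inputs and next-step targets.
            X = np.array([series[i:i + window] for i in range(len(series) - window)])
            y = np.array(series[window:])
            return X[..., np.newaxis], y

        # Synthetic stand-in for a per-minute request-rate series extracted from server logs.
        load = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)
        X, y = make_windows(load)

        model = Sequential([LSTM(64, input_shape=(WINDOW, 1)), Dense(1)])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X, y, epochs=5, batch_size=64, verbose=0)

        print("next predicted load:", float(model.predict(X[-1:], verbose=0)[0, 0]))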

  9. Cybersecurity, massive data processing, community interaction, and other developments at WWW-based computational X-ray Server

    Science.gov (United States)

    Stepanov, Sergey

    2013-03-01

    X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it has accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases, and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.
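
    A sketch of what such automated access typically looks like from user software, assuming a plain HTTP form submission; the endpoint path and form field names below are invented placeholders, and the real parameter names are those documented for each program on the server.

        import requests

        # Hypothetical submission of one calculation to a server-side program.
        resp = requests.post(
            "https://x-server.example.org/cgi/example_program",       # placeholder endpoint
            data={"wavelength": 1.5406, "crystal": "Si", "mode": 1},  # placeholder fields
            timeout=60,
        )
        resp.raise_for_status()

        # The server would return a page (or plain text) containing the computed curves.
        print(resp.text[:200])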

  10. Cybersecurity, massive data processing, community interaction, and other developments at WWW-based computational X-ray Server

    International Nuclear Information System (INIS)

    Stepanov, Sergey

    2013-01-01

    X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it has accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases, and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.

  11. StaRProtein, A Web Server for Prediction of the Stability of Repeat Proteins

    Science.gov (United States)

    Xu, Yongtao; Zhou, Xu; Huang, Meilan

    2015-01-01

    Repeat proteins have become increasingly important due to their capability to bind to almost any protein and their potential as an alternative therapy to monoclonal antibodies. In the past decade repeat proteins have been designed to mediate specific protein-protein interactions. The tetratricopeptide and ankyrin repeat proteins are two classes of helical repeat proteins that form different binding pockets to accommodate various partners. It is important to understand the factors that define folding and stability of repeat proteins in order to prioritize the most stable designed repeat proteins and further explore their potential binding affinities. Here we developed distance-dependent statistical potentials using two classes of alpha-helical repeat proteins, tetratricopeptide and ankyrin repeat proteins respectively, and evaluated their efficiency in predicting the stability of repeat proteins. We demonstrated that the repeat-specific statistical potentials based on these two classes of repeat proteins showed superior accuracy compared with non-specific statistical potentials in: 1) discriminating correct vs. incorrect models and 2) ranking the stability of designed repeat proteins. In particular, the statistical scores correlate closely with the equilibrium unfolding free energies of repeat proteins and would therefore serve as a novel tool for quickly prioritizing designed repeat proteins with high stability. The StaRProtein web server was developed for predicting the stability of repeat proteins. PMID:25807112

  12. Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS

    CERN Document Server

    McKee, S; The ATLAS collaboration; Laurens, P; Severini, H; Wlodek, T; Wolff, S; Zurawski, J

    2012-01-01

    We will present our motivations for deploying and using the perfSONAR-PS Performance Toolkit at ATLAS sites in the United States and describe our experience in using it. This software creates a dedicated monitoring server, capable of collecting and performing a wide range of passive and active network measurements. Each independent instance is managed locally, but able to federate on a global scale; enabling a full view of the network infrastructure that spans domain boundaries. This information, available through web service interfaces, can easily be retrieved to create customized applications. USATLAS has developed a centralized “dashboard” offering network administrators, users, and decision makers the ability to see the performance of the network at a glance. The dashboard framework includes the ability to notify users (alarm) when problems are found, thus allowing rapid response to potential problems and making perfSONAR-PS crucial to the operation of our distributed computing infrastructure.

  13. A JEE RESTful service to access Conditions Data in ATLAS

    CERN Document Server

    Formica, Andrea; Gallas, Elizabeth

    2015-01-01

    Usage of Conditions Data in ATLAS is extensive for offline reconstruction and analysis (e.g.: alignment, calibration, data quality). The system is based on the LCG Conditions Database infrastructure, with read and write access via an ad hoc C++ API (COOL), a system which was developed before Run 1 data taking began. The infrastructure dictates that the data is organized into separate schemata (assigned to subsystems/groups storing distinct and independent sets of conditions), making it difficult to access information from several schemata at the same time. We have thus created PL/SQL functions containing queries to provide content extraction at multi-schema level. The PL/SQL API has been exposed to external clients by means of a Java application providing DB access via RESTful services, deployed inside an application server (JBoss WildFly). The services allow navigation over multiple schemata via simple URLs. The data can be retrieved either in XML or JSON formats, via simple clients (like curl or Web browser...

  14. A JEE RESTful service to access Conditions Data in ATLAS

    Science.gov (United States)

    Formica, Andrea; Gallas, E. J.

    2015-12-01

    Usage of condition data in ATLAS is extensive for offline reconstruction and analysis (e.g. alignment, calibration, data quality). The system is based on the LCG Conditions Database infrastructure, with read and write access via an ad hoc C++ API (COOL), a system which was developed before Run 1 data taking began. The infrastructure dictates that the data is organized into separate schemas (assigned to subsystems/groups storing distinct and independent sets of conditions), making it difficult to access information from several schemas at the same time. We have thus created PL/SQL functions containing queries to provide content extraction at multi-schema level. The PL/SQL API has been exposed to external clients by means of a Java application providing DB access via REST services, deployed inside an application server (JBoss WildFly). The services allow navigation over multiple schemas via simple URLs. The data can be retrieved either in XML or JSON formats, via simple clients (like curl or Web browsers).
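
    As a hedged sketch of how a client might use such a service, the snippet below issues a plain GET and asks for JSON through content negotiation; the base URL, path layout and payload shape are hypothetical placeholders defined by the deployed service, not the actual ATLAS endpoints.

        import requests

        # Hypothetical REST endpoint exposing a multi-schema conditions query.
        BASE = "https://conditions.example.org/api"
        url = f"{BASE}/schemas/EXAMPLE_SCHEMA/tags/EXAMPLE_TAG/iovs"

        # The Accept header selects JSON instead of the XML default.
        r = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
        r.raise_for_status()

        for iov in r.json()[:5]:   # assume the service returns a JSON list of IOV records
            print(iov)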

  15. The ATLAS detector control system

    International Nuclear Information System (INIS)

    Schlenker, S.; Arfaoui, S.; Franz, S.

    2012-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of more than 130 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10^6 operational parameters. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, and manage the communication with external systems such as the LHC. First, this contribution describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data taking operation period. Secondly, the future evolution and maintenance constraints for the coming years and the LHC high luminosity upgrades are outlined. (authors)

  16. The ATLAS Detector Control System

    CERN Document Server

    Schlenker, S; Kersten, S; Hirschbuehl, D; Braun, H; Poblaguev, A; Oliveira Damazio, D; Talyshev, A; Zimmermann, S; Franz, S; Gutzwiller, O; Hartert, J; Mindur, B; Tsarouchas, CA; Caforio, D; Sbarra, C; Olszowska, J; Hajduk, Z; Banas, E; Wynne, B; Robichaud-Veronneau, A; Nemecek, S; Thompson, PD; Mandic, I; Deliyergiyev, M; Polini, A; Kovalenko, S; Khomutnikov, V; Filimonov, V; Bindi, M; Stanecka, E; Martin, T; Lantzsch, K; Hoffmann, D; Huber, J; Mountricha, E; Santos, HF; Ribeiro, G; Barillari, T; Habring, J; Arabidze, G; Boterenbrood, H; Hart, R; Marques Vinagre, F; Lafarguette, P; Tartarelli, GF; Nagai, K; D'Auria, S; Chekulaev, S; Phillips, P; Ertel, E; Brenner, R; Leontsinis, S; Mitrevski, J; Grassi, V; Karakostas, K; Iakovidis, G.; Marchese, F; Aielli, G

    2011-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of >130 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10^6 operational parameters. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, and manage the communication with external systems such as the LHC. This contribution firstly describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data taking operation period. Secondly, the future evolution and maintenance constraints for the coming years an...

  17. Development of Sales and Inventory Workflow Management Information System Web Portal for Petrospan Integrated Services, Eket, Akwa Ibom State, Nigeria

    OpenAIRE

    Ezeonwumelu, Adanna Ngozi; Eunice, Akinloye Bolanle; Ezenugu, Isaac A.

    2017-01-01

    In this paper, the development of a Sales and Inventory Workflow Management Information System (SIWfMS) web portal for Petrospan Integrated Services, Eket, Akwa Ibom State, Nigeria is presented. The Rapid Application Development (RAD) methodology was used in the web application development. A three-tier architecture based on a WAMP server configuration was adopted. The WAMP server was made up of the Windows operating system, the Apache web server, the MySQL database system and the PHP server-side scripting language. The...

  18. Towards Big Earth Data Analytics: The EarthServer Approach

    Science.gov (United States)

    Baumann, Peter

    2013-04-01

    import and, hence, duplication); the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. Client/server interfaces are strictly based on OGC and W3C standards, in particular the Web Coverage Processing Service (WCPS) which defines a high-level raster query language. We present the EarthServer project with its vision and approaches, relate it to the current state of standardization, and demonstrate it by way of large-scale data centers and their services using rasdaman.

  19. LECTINPred: web Server that Uses Complex Networks of Protein Structure for Prediction of Lectins with Potential Use as Cancer Biomarkers or in Parasite Vaccine Design.

    Science.gov (United States)

    Munteanu, Cristian R; Pedreira, Nieves; Dorado, Julián; Pazos, Alejandro; Pérez-Montoto, Lázaro G; Ubeira, Florencio M; González-Díaz, Humberto

    2014-04-01

    Lectins (Ls) play an important role in many diseases, such as different types of cancer and parasitic infections. Interestingly, the Protein Data Bank (PDB) contains more than 3000 protein 3D structures with unknown function. Thus, we can, in principle, discover new Ls by mining non-annotated structures from the PDB or other sources. However, there are no general models to predict new biologically relevant Ls based on 3D chemical structures. We used the MARCH-INSIDE software to calculate the Markov-Shannon 3D electrostatic entropy parameters for the complex networks of protein structure of 2200 different protein 3D structures, including 1200 Ls. We performed a Linear Discriminant Analysis (LDA) using these parameters as inputs in order to seek a new Quantitative Structure-Activity Relationship (QSAR) model able to discriminate the 3D structures of Ls from those of other proteins. We implemented this predictor in the web server named LECTINPred, freely available at http://bio-aims.udc.es/LECTINPred.php. This web server showed the following goodness-of-fit statistics: Sensitivity=96.7 % (for Ls), Specificity=87.6 % (non-active proteins), and Accuracy=92.5 % (for all proteins), considering both the training and external prediction series together. In mode 2, users can carry out an automatic retrieval of protein structures from the PDB. We illustrated the use of this server, in operation mode 1, by performing data mining of the PDB. We predicted lectin scores for more than 2000 proteins with unknown function and selected the top-scored ones as possible lectins. In operation mode 2, LECTINPred can also upload 3D structural models generated with structure-prediction tools like LOMETS or PHYRE2. The new Ls are expected to be of relevance as cancer biomarkers or useful in parasite vaccine design. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
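
    A schematic scikit-learn version of the modeling step described above, training a Linear Discriminant Analysis classifier to separate lectins from non-lectins on per-structure descriptors; the feature values and class sizes are synthetic placeholders rather than MARCH-INSIDE entropy parameters.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Placeholder descriptors: 1000 non-lectins (label 0) and 1200 lectins (label 1),
        # each described by five entropy-like features.
        X = np.vstack([rng.normal(0.0, 1.0, (1000, 5)), rng.normal(0.7, 1.0, (1200, 5))])
        y = np.array([0] * 1000 + [1] * 1200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

        lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
        print("external-set accuracy:", accuracy_score(y_te, lda.predict(X_te)))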

  20. LIBP-Pred: web server for lipid binding proteins using structural network parameters; PDB mining of human cancer biomarkers and drug targets in parasites and bacteria.

    Science.gov (United States)

    González-Díaz, Humberto; Munteanu, Cristian R; Postelnicu, Lucian; Prado-Prado, Francisco; Gestal, Marcos; Pazos, Alejandro

    2012-03-01

    Lipid-Binding Proteins (LIBPs) or Fatty Acid-Binding Proteins (FABPs) play an important role in many diseases such as different types of cancer, kidney injury, atherosclerosis, diabetes, intestinal ischemia and parasitic infections. Thus, computational methods that can predict LIBPs based on 3D structure parameters became a goal of major importance for drug-target discovery, vaccine design and biomarker selection. In addition, the Protein Data Bank (PDB) contains 3000+ protein 3D structures with unknown function. This list, as well as new experimental outcomes in proteomics research, is a very interesting source for discovering relevant proteins, including LIBPs. However, to the best of our knowledge, there are no general models to predict new LIBPs based on 3D structures. We developed new Quantitative Structure-Activity Relationship (QSAR) models based on 3D electrostatic parameters of 1801 different proteins, including 801 LIBPs. We calculated these electrostatic parameters with the MARCH-INSIDE software; they correspond to the entire protein or to specific protein regions named core, inner, middle, and surface. We used these parameters as inputs to develop a simple Linear Discriminant Analysis (LDA) classifier to discriminate the 3D structures of LIBPs from those of other proteins. We implemented this predictor in the web server named LIBP-Pred, freely available along with other important web servers of the Bio-AIMS portal. Users can carry out an automatic retrieval of protein structures from the PDB or upload from their disk custom protein structural models created with the LOMETS server. We demonstrated the PDB mining option by performing a predictive study of 2000+ proteins with unknown function. Interesting results regarding the discovery of new cancer biomarkers in humans and drug targets in parasites are discussed.

  1. ATLAS Maintenance and Operation management system

    CERN Document Server

    Copy, B

    2007-01-01

    The maintenance and operation of the ATLAS detector will involve thousands of contributors from 170 physics institutes. Planning and coordinating the actions of ATLAS members, ensuring their expertise is properly leveraged and that no parts of the detector are understaffed or overstaffed, will be a challenging task. The ATLAS Maintenance and Operation application (referred to as Operation Task Planner inside the ATLAS experiment) offers a fluent web-based interface that combines the flexibility and comfort of a desktop application, intuitive data visualization and navigation techniques, with a lightweight service-oriented architecture. We will review the application, its usage within the ATLAS experiment, and its underlying design and implementation.

  2. Node web development

    CERN Document Server

    Herron, David

    2013-01-01

    Presented in a simple, step-by-step format, this book is an introduction to web development with Node. This book is for anybody looking for an alternative to the "P" languages (Perl, PHP, Python), or anyone looking for a new paradigm of server-side application development. The reader should have at least a rudimentary understanding of JavaScript and web application development.

  3. Large scale access tests and online interfaces to ATLAS conditions databases

    International Nuclear Information System (INIS)

    Amorim, A; Lopes, L; Pereira, P; Simoes, J; Soloviev, I; Burckhart, D; Schmitt, J V D; Caprini, M; Kolos, S

    2008-01-01

    The access of the ATLAS Trigger and Data Acquisition (TDAQ) system to the ATLAS Conditions Databases sets strong reliability and performance requirements on the database storage and access infrastructures. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, including the interface to the Information Services (IS) and to the TDAQ Configuration Databases. The information storage requirements were the motivation for the ONline A Synchronous Interface to COOL (ONASIC) from the Information Service (IS) to LCG/COOL databases. ONASIC avoids the possible backpressure from Online Database servers by managing a local cache. In parallel, OKS2COOL was developed to store Configuration Databases into an Offline Database with history record. The DBStressor application was developed to test and stress the access to the Conditions database using the LCG/COOL interface while operating in an integrated way as a TDAQ application. The performance scaling of simultaneous Conditions database read accesses was studied in the context of the ATLAS High Level Trigger large computing farms. A large set of tests were performed involving up to 1000 computing nodes that simultaneously accessed the LCG central database server infrastructure at CERN

  4. Molecular structure input on the web

    Directory of Open Access Journals (Sweden)

    Ertl Peter

    2010-02-01

    Full Text Available Abstract A molecule editor, that is, a program for input and editing of molecules, is an indispensable part of every cheminformatics or molecular processing system. This review focuses on a special type of molecule editor, namely those that are used for molecular structure input on the web. Scientific computing is now moving more and more in the direction of web services and cloud computing, with servers scattered all around the Internet. Thus a web browser has become the universal scientific user interface, and a tool to edit molecules directly within the web browser is essential. The review covers the history of web-based structure input, starting with simple text entry boxes and early molecule editors based on clickable maps, before moving to the current situation dominated by Java applets. One typical example - the popular JME Molecule Editor - will be described in more detail. Modern Ajax server-side molecule editors are also presented. And finally, the possible future direction of web-based molecule editing, based on technologies like JavaScript and Flash, is discussed.

  5. Seq2Ref: a web server to facilitate functional interpretation

    Directory of Open Access Journals (Sweden)

    Li Wenlin

    2013-01-01

    Full Text Available Abstract Background The size of the protein sequence database has been exponentially increasing due to advances in genome sequencing. However, experimentally characterized proteins only constitute a small portion of the database, such that the majority of sequences have been annotated by computational approaches. Current automatic annotation pipelines inevitably introduce errors, making the annotations unreliable. Instead of such error-prone automatic annotations, functional interpretation should rely on annotations of ‘reference proteins’ that have been experimentally characterized or manually curated. Results The Seq2Ref server uses BLAST to detect proteins homologous to a query sequence and identifies the reference proteins among them. Seq2Ref then reports publications with experimental characterizations of the identified reference proteins that might be relevant to the query. Furthermore, a plurality-based rating system is developed to evaluate the homologous relationships and rank the reference proteins by their relevance to the query. Conclusions The reference proteins detected by our server will lend insight into proteins of unknown function and provide extensive information to develop in-depth understanding of uncharacterized proteins. Seq2Ref is available at: http://prodata.swmed.edu/seq2ref.

  6. FELIX - the new detector readout system for the ATLAS experiment

    CERN Document Server

    AUTHOR|(SzGeCERN)754725; The ATLAS collaboration; Anderson, John Thomas; Borga, Andrea; Boterenbrood, Hendrik; Chen, Hucheng; Chen, Kai; Drake, Gary; Donszelmann, Mark; Francis, David; Gorini, Benedetto; Guest, Daniel; Lanni, Francesco; Lehmann Miotto, Giovanna; Levinson, Lorne; Roich, Alexander; Schreuder, Frans Philip; Schumacher, Jörn; Vandelli, Wainer; Vermeulen, Jos; Wu, Weihao; Zhang, Jinlong

    2016-01-01

    From the ATLAS Phase-I upgrade onward, new or upgraded detectors and trigger systems will be interfaced to the data acquisition, detector control and timing (TTC) systems by the Front-End Link eXchange (FELIX). FELIX is the core of the new ATLAS Trigger/DAQ architecture. Functioning as a router between custom serial links and a commodity network, FELIX is implemented by server PCs with commodity network interfaces and PCIe cards with large FPGAs and many high-speed serial fiber transceivers. By separating data transport from data manipulation, the latter can be done by software in commodity servers attached to the network. Replacing traditional point-to-point links between front-end components and the DAQ system with a switched network, FELIX provides scalability, flexibility, uniformity and upgradability. Different front-end data types or different data sources can be routed to different network endpoints that handle that data type or source: e.g. event data, configuration, calibration, detector control, monito...

  7. Interim policy on establishment and operation of internet open, anonymous information servers and services

    OpenAIRE

    Acting Dean of Computer and Information Services

    1995-01-01

    Purpose. To establish interim NPS general policy regarding establishment and operation of Open, Anonymous Information Servers and Services, such as World Wide Web (http), Gopher, Anonymous FTP, etc...

  8. Beginning JSP, JSF, and Tomcat web development from novice to professional

    CERN Document Server

    Zambon, Giulio

    2008-01-01

    A comprehensive introduction to JavaServer Pages (JSP), JavaServer Faces (JSF), and the Apache Tomcat Web application server, this manual makes key concepts easy to grasp by numerous working examples and a walk-through of the development of a complete e-commerce project.

  9. Design of a Zimbra Mail Server Using Virtualization Technology. Case Study: SMK Pancakarya, Tangerang City

    Directory of Open Access Journals (Sweden)

    Heru Prasetiawan

    2017-05-01

    Full Text Available The rapid development of information technology spurs the emergence of new technologies that are constantly evolving. These developments produce technologies that are more reliable, efficient, economical, and powerful than previous ones. Electronic mail (email) is a form of electronic communication and correspondence through a computer system, transmitted across the computer network to another computer. A mail server is needed to support communication via email. The Zimbra mail server is implemented using virtualization technology with Proxmox, a Debian-based Linux distribution, as the host operating system and SLES (SUSE Linux Enterprise Server) as the guest operating system. This research was conducted at an institution that already had computer networking facilities, in order to complement them with a mail server. The result is a mail server built on virtualization technology that provides a web-based mail client, antivirus and antispam facilities.

  10. Web Application Design Using Server-Side JavaScript

    Energy Technology Data Exchange (ETDEWEB)

    Hampton, J.; Simons, R.

    1999-02-01

    This document describes the application design philosophy for the Comprehensive Nuclear Test Ban Treaty Research & Development Web Site. This design incorporates object-oriented techniques to produce a flexible and maintainable system of applications that support the web site. These techniques will be discussed at length along with the issues they address. The overall structure of the applications and their relationships with one another will also be described. The current problems and future design changes will be discussed as well.

  11. Creation of a web server core

    OpenAIRE

    Corvera Bello, Antoni-Eric

    2013-01-01

    Design and implementation of a new multithreaded, multiplatform web server core in C++.

  12. Usability as the Key Factor to the Design of a Web Server for the CReF Protein Structure Predictor: The wCReF

    Directory of Open Access Journals (Sweden)

    Vanessa Stangherlin Machado Paixão-Cortes

    2018-01-01

    Full Text Available Protein structure prediction servers use various computational methods to predict the three-dimensional structure of proteins from their amino acid sequence. Predicted models are used to infer protein function and guide experimental efforts. This can contribute to solving the problem of predicting tertiary protein structures, one of the main unsolved problems in bioinformatics. The challenge is to understand the relationship between the amino acid sequence of a protein and its three-dimensional structure, which is related to the function of these macromolecules. This article is an extended version of the article wCReF: The Web Server for the Central Residue Fragment-based Method (CReF) Protein Structure Predictor, published in the 14th International Conference on Information Technology: New Generations. In the first version, we presented wCReF, a protein structure prediction server for the central residue fragment-based method. The wCReF interface was developed with a focus on usability and user interaction. With this tool, users can enter the amino acid sequence of their target protein and obtain its approximate 3D structure without the need to install the multitude of necessary tools. In this extended version, we present the design process of the prediction server in detail, which includes: (A) identification of user needs: aiming at understanding the features of a protein structure prediction server, the end-user profiles and the commonly performed tasks; (B) server usability inspection: in order to define wCReF's requirements and features, we used heuristic evaluation guided by experts in both the human-computer interaction and bioinformatics domain areas, applied to the protein structure prediction servers I-TASSER, QUARK and Robetta; as a result, issues were found for all heuristics, resulting in 89 usability problems; (C) software requirements document and prototype: assessment results guiding the key features that wCReF must

  13. Monitoring and controlling ATLAS data management: The Rucio web user interface

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration; Barisits, Martin-Stefan; Serfon, Cedric; Vigne, Ralph; Garonne, Vincent

    2015-01-01

    The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2 and the increased volume of managed information. This interface encompasses both a monitoring and a controlling component, and allows easy integration of user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency; this includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done in a massively parallel fashion, owing to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human and programmatic access, making it easy to access selective parts of the information both in constrained frontends like ...

  14. Monitoring and controlling ATLAS data management: The Rucio web user interface

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration; Vigne, Ralph; Barisits, Martin-Stefan; Garonne, Vincent; Serfon, Cedric

    2015-01-01

    The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2 and the increased volume of managed information. This interface encompasses both a monitoring and a controlling component, and allows easy integration of user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency; this includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done in a massively parallel fashion, owing to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human and programmatic access, making it easy to access selective parts of the information both in constrained...
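
    As a hedged illustration of the third principle above (no distinction between human and programmatic access), a monitoring view exposed as plain HTTP/JSON can be consumed by a script as easily as by a browser. The base URL and path in the Python sketch below are hypothetical, not documented Rucio endpoints.

      # Sketch: fetching a JSON monitoring summary from a hypothetical endpoint.
      import json
      import urllib.request

      def fetch_summary(base_url: str) -> dict:
          """Retrieve a monitoring summary; base_url and path are hypothetical."""
          with urllib.request.urlopen(base_url + "/monitoring/transfers/summary") as resp:
              return json.load(resp)

      if __name__ == "__main__":
          print(json.dumps(fetch_summary("https://monitoring.example.org"), indent=2))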

  15. RESTful Web Services Cookbook

    CERN Document Server

    Allamaraju, Subbu

    2010-01-01

    While the REST design philosophy has captured the imagination of web and enterprise developers alike, using this approach to develop real web services is no picnic. This cookbook includes more than 100 recipes to help you take advantage of REST, HTTP, and the infrastructure of the Web. You'll learn ways to design RESTful web services for client and server applications that meet performance, scalability, reliability, and security goals, no matter what programming language and development framework you use. Each recipe includes one or two problem statements, with easy-to-follow, step-by-step instructions ...
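
    By way of illustration only (not a recipe quoted from the book), the Python sketch below shows one common RESTful pattern: read a resource together with its ETag, then update it with If-Match so that concurrent edits are detected. The base URL and resource path are hypothetical.

      # Sketch of a conditional update against a hypothetical REST API.
      import json
      import urllib.request

      BASE = "https://api.example.org"

      def get_resource(url):
          req = urllib.request.Request(url, headers={"Accept": "application/json"})
          with urllib.request.urlopen(req) as resp:
              return json.load(resp), resp.headers.get("ETag")

      def put_resource(url, payload, etag):
          headers = {"Content-Type": "application/json"}
          if etag:
              headers["If-Match"] = etag  # optimistic concurrency control
          data = json.dumps(payload).encode("utf-8")
          req = urllib.request.Request(url, data=data, headers=headers, method="PUT")
          with urllib.request.urlopen(req) as resp:
              return resp.status

      if __name__ == "__main__":
          doc, etag = get_resource(BASE + "/orders/42")
          doc["status"] = "shipped"
          print(put_resource(BASE + "/orders/42", doc, etag))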

  16. EnviroAtlas Estimated Intersection Density of Walkable Roads Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in each EnviroAtlas community....

  17. Enc-DNS-HTTP: Utilising DNS Infrastructure to Secure Web Browsing

    Directory of Open Access Journals (Sweden)

    Mohammed Abdulridha Hussain

    2017-01-01

    Full Text Available Online information security is a major concern for both users and companies, since data transferred via the Internet is becoming increasingly sensitive. The World Wide Web uses Hypertext Transfer Protocol (HTTP) to transfer information and Secure Sockets Layer (SSL) to secure the connection between clients and servers. However, Hypertext Transfer Protocol Secure (HTTPS) is vulnerable to attacks that threaten the privacy of information sent between clients and servers. In this paper, we propose Enc-DNS-HTTP for securing client requests, protecting server responses, and withstanding HTTPS attacks. Enc-DNS-HTTP is based on the distribution of a web server public key, which is transferred via secure communication between the client and a Domain Name System (DNS) server. This key is used to encrypt client-server communication. The scheme is implemented in the C programming language and tested on a Linux platform. In comparison with Apache HTTPS, this scheme is shown to have more effective resistance to attacks and improved performance, since it does not involve a high number of time-consuming operations.
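
    The core idea above, encrypting client-server traffic with a public key obtained out of band, can be sketched with textbook RSA-OAEP. The Python fragment below is a conceptual illustration only: the cited scheme is implemented in C and also covers key distribution through DNS, which is not shown here.

      # Conceptual sketch: encrypt a small client request with the server's RSA
      # public key; only the server's private key can recover it. Requires the
      # 'cryptography' package. Not the authors' implementation.
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      # In the real scheme the client would obtain the public key via DNS;
      # here a key pair is generated locally just to make the example runnable.
      server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      server_public = server_private.public_key()

      oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)

      request = b"GET /index.html HTTP/1.1"
      ciphertext = server_public.encrypt(request, oaep)      # done by the client
      recovered = server_private.decrypt(ciphertext, oaep)   # done by the server
      assert recovered == request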

  18. An Improved Algorithm Research on the PrefixSpan Based on the Server Session Constraint

    Directory of Open Access Journals (Sweden)

    Cai Hong-Guo

    2017-01-01

    Full Text Available When mining long sequential patterns and discovering knowledge with the PrefixSpan algorithm in Web Usage Mining (WUM), the large number of elements and suffix sequences can make the computation very expensive and lead to problems such as space explosion. To address this, a more effective approach is proposed. First, a server-session-based log file format is defined. Then an improved PrefixSpan algorithm constrained by server sessions is described for mining frequent sequential patterns on a website. Finally, the validity and superiority of the method are demonstrated by the experiments presented in the paper.
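
    To make the baseline concrete, the short Python sketch below is a textbook PrefixSpan over item sequences (sessions treated as lists of page identifiers); it is not the improved, session-constrained variant proposed in the paper, and the sample sessions are invented for illustration.

      # Minimal PrefixSpan: recursively grow frequent prefixes and mine their
      # projected (suffix) databases. Illustrative baseline only.
      def prefixspan(sequences, min_support):
          patterns = {}

          def mine(prefix, projected):
              counts = {}
              for seq in projected:
                  for item in set(seq):
                      counts[item] = counts.get(item, 0) + 1
              for item, support in counts.items():
                  if support < min_support:
                      continue
                  new_prefix = prefix + (item,)
                  patterns[new_prefix] = support
                  # Project each sequence onto the suffix after the first occurrence.
                  suffixes = [s[s.index(item) + 1:] for s in projected if item in s]
                  mine(new_prefix, [s for s in suffixes if s])

          mine((), sequences)
          return patterns

      # Hypothetical sessions: sequences of page identifiers.
      sessions = [["home", "search", "item", "cart"],
                  ["home", "item", "cart"],
                  ["search", "item", "help"]]
      print(prefixspan(sessions, min_support=2))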

  19. Web-Based Distributed Simulation of Aeronautical Propulsion System

    Science.gov (United States)

    Zheng, Desheng; Follen, Gregory J.; Pavlik, William R.; Kim, Chan M.; Liu, Xianyou; Blaser, Tammy M.; Lopez, Isaac

    2001-01-01

    An application was developed to allow users to run and view Numerical Propulsion System Simulation (NPSS) engine simulations from web browsers. Simulations were performed on multiple Information Power Grid (IPG) testbeds. The Common Object Request Broker Architecture (CORBA) was used for brokering data exchange among machines, and IPG/Globus for job scheduling and remote process invocation. Web server scripting was performed with JavaServer Pages (JSP). This application has proven to be an effective and efficient way to couple heterogeneous distributed components.
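
    As a loosely analogous sketch only: the cited system brokered calls between machines with CORBA and served pages with JSP; the Python fragment below uses the standard library's XML-RPC to show the same basic pattern of exposing a simulation routine for remote invocation over HTTP. Host, port, and function names are hypothetical.

      # Sketch: expose a (placeholder) engine-case calculation for remote calls.
      from xmlrpc.server import SimpleXMLRPCServer

      def run_engine_case(mach, altitude_ft):
          # Placeholder: a real worker node would run the propulsion simulation here.
          return {"mach": mach, "altitude_ft": altitude_ft, "status": "completed"}

      server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
      server.register_function(run_engine_case)
      server.serve_forever()

      # A client on another machine would call it with:
      #   import xmlrpc.client
      #   proxy = xmlrpc.client.ServerProxy("http://sim-node.example.org:8000")
      #   print(proxy.run_engine_case(0.8, 35000))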

  20. ATLAS TDAQ system administration: Master of Puppets

    CERN Document Server

    AUTHOR|(SzGeCERN)727357; The ATLAS collaboration; Ballestrero, Sergio; Brasolin, Franco; Fazio, Daniel; Gament, Costin-Eugen; Scannicchio, Diana; Twomey, Matthew Shaun

    2017-01-01

    Within the ATLAS detector, the Trigger and Data Acquisition system is responsible for the online processing of data streamed from the detector during collisions at the Large Hadron Collider at CERN. The online farm comprises ∼4000 servers processing the data read out from ∼100 million detector channels through multiple trigger levels. The configuration of these servers is not an easy task, especially since the detector itself is made up of multiple different sub-detectors, each with its own particular requirements. The previous method of configuring these servers, using Quattor and a hierarchical script system, was cumbersome and restrictive. A better, unified system was therefore required to simplify the tasks of the TDAQ system administrators, for both the local and net-booted systems, and to fulfil the requirements of TDAQ, the Detector Control Systems and the sub-detector groups. Various configuration management systems were evaluated, though in the end, Puppet was chosen as the applic...