WorldWideScience

Sample records for server database design

  1. Pro SQL Server 2012 relational database design and implementation

    CERN Document Server

    Davidson, Louis

    2012-01-01

    Learn effective and scalable database design techniques in a SQL Server environment. Pro SQL Server 2012 Relational Database Design and Implementation covers everything from design logic that business users will understand, all the way to the physical implementation of design in a SQL Server database. Grounded in best practices and a solid understanding of the underlying theory, Louis Davidson shows how to "get it right" in SQL Server database design and lay a solid groundwork for the future use of valuable business data. Gives a solid foundation in best practices and relational theory. Covers

  2. Design of a WWW database server for Atomic Spectroscopy Data

    Energy Technology Data Exchange (ETDEWEB)

    Contis, A

    1995-12-01

    The Department of Atomic Spectroscopy at Lund University produces large amounts of experimental data on energy levels and emissions for atomic systems. In order to make this data easily available to users outside the institution, a database has been produced and made available on the Internet. This report describes the organization of the data and the Internet interface of the database. 4 refs.

  3. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high-precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  4. Analysis of Java Distributed Architectures in Designing and Implementing a Client/Server Database System

    National Research Council Canada - National Science Library

    Akin, Ramis

    1998-01-01

    .... Information is scattered throughout organizations and must be easily accessible. A new solution is needed for effective and efficient management of data in today's distributed client/server environment...

  5. Design and implementation of an enterprise information system utilizing a component based three-tier client/server database system

    OpenAIRE

    Akbay, Murat.; Lewis, Steven C.

    1999-01-01

    The Naval Security Group currently requires a modern architecture to merge existing command databases into a single Enterprise Information System through which each command may manipulate administrative data. There are numerous technologies available to build and implement such a system. Component-based architectures are extremely well-suited for creating scalable and flexible three-tier Client/Server systems because the data and business logic are encapsulated within objects, allowing them t...

  6. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan.

    Science.gov (United States)

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    Rapid advances in research on traditional Chinese medicine (TCM) have greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing TCM intelligent screening system (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have little experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH environment, and flexible docking modes, are implemented. Users can download the 200 TCM compounds with the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors of a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  7. Efficient Incremental Garbage Collection for Workstation/Server Database Systems

    OpenAIRE

    Amsaleg , Laurent; Gruber , Olivier; Franklin , Michael

    1994-01-01

    Projet RODIN; We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a workstation/server environment. The algorithm is incremental and runs concurrently with client transactions; however, it does not hold any locks on data and does not require callbacks to clients. It is fault tolerant, but performs very little logging. The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation ...

  8. DESIGN AND DEVELOPMENT OF WEB-BASED SQL SERVER DATABASE MANAGEMENT SOFTWARE

    Directory of Open Access Journals (Sweden)

    Muchammad Husni

    2005-01-01

    Full Text Available Microsoft SQL Server is a desktop database server application of the client/server kind: it has a client component, which displays and manipulates data, and a server component, which stores, retrieves, and secures the database. Management operations for all the database servers on a network are carried out by the database administrator using SQL Server's main administrative tool, Enterprise Manager. As a consequence, the database administrator can perform these operations only on computers on which Microsoft SQL Server has been installed. In this research, a web-based application was designed using ASP.Net to administer a database server. The application uses ADO.NET, which employs Transact-SQL and stored procedures on the server, to perform database management operations on a SQL database server and present the results on the web. The database administrator can run this web-based application from any computer on the network and connect to the SQL database server with a web browser, which makes it easier to carry out administrative tasks without having to use the server computer. Keywords: Transact-SQL, ASP.Net, ADO.NET, SQL Server
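
    The pattern described above, a remote client issuing Transact-SQL management operations against a SQL Server instance, can be sketched briefly. The snippet below uses Python and pyodbc rather than the paper's ASP.Net/ADO.NET stack; the driver string, server name, credentials and database name are illustrative placeholders.

    ```python
    # Minimal sketch: issuing Transact-SQL management commands against a
    # SQL Server instance from a remote client, here with pyodbc instead of
    # ADO.NET. Server, credentials and database name are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=dbhost;UID=sa;PWD=secret",
        autocommit=True,  # CREATE DATABASE cannot run inside a transaction
    )
    cur = conn.cursor()

    # A typical administrative operation: create a database ...
    cur.execute("CREATE DATABASE InventoryDB")

    # ... and list the databases on the server via a system stored procedure.
    for row in cur.execute("EXEC sp_databases"):
        print(row.DATABASE_NAME)
    ```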

  9. Securing SQL Server: Protecting Your Database from Attackers

    CERN Document Server

    Cherry, Denny

    2015-01-01

    SQL Server is the most widely used database platform in the world, and a large percentage of these databases are not properly secured, exposing sensitive customer and business data to attack. In Securing SQL Server, Third Edition, you will learn about the potential attack vectors that can be used to break into SQL Server databases as well as how to protect databases from these attacks. In this book, Denny Cherry - a Microsoft SQL MVP and one of the biggest names in SQL Server - will teach you how to properly secure a SQL Server database from internal and external threats using best practic
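
    One attack vector that books in this vein invariably treat is SQL injection. As a minimal illustration (ours, not the book's), the sketch below contrasts string-spliced SQL with the standard defense of parameterized queries; the DSN and table are hypothetical.

    ```python
    # Illustration of the standard defense against SQL injection: never splice
    # user input into SQL text, bind it as a parameter instead.
    # The DSN and the customers table are hypothetical.
    import pyodbc

    conn = pyodbc.connect("DSN=appdb")  # assumes a pre-configured ODBC DSN
    cur = conn.cursor()

    user_input = "'; DROP TABLE customers; --"

    # Vulnerable: the payload above would break out of the string literal.
    # cur.execute(f"SELECT * FROM customers WHERE name = '{user_input}'")

    # Safe: the driver transmits the value separately, so it stays pure data.
    cur.execute("SELECT * FROM customers WHERE name = ?", user_input)
    print(cur.fetchall())
    ```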

  10. Securing SQL Server: Protecting Your Database from Attackers

    CERN Document Server

    Cherry, Denny

    2012-01-01

    Written by Denny Cherry, a Microsoft MVP for the SQL Server product, a Microsoft Certified Master for SQL Server 2008, and one of the biggest names in SQL Server today, Securing SQL Server, Second Edition explores the potential attack vectors someone can use to break into your SQL Server database as well as how to protect your database from these attacks. In this book, you will learn how to properly secure your database from both internal and external threats using best practices and specific tricks the author uses in his role as an independent consultant while working on some of the largest

  11. Securing SQL Server: Protecting Your Database from Attackers

    CERN Document Server

    Cherry, Denny

    2011-01-01

    There is a lot at stake for administrators taking care of servers, since they house sensitive data like credit cards, social security numbers, medical records, and much more. In Securing SQL Server you will learn about the potential attack vectors that can be used to break into your SQL Server database, and how to protect yourself from these attacks. Written by a Microsoft SQL Server MVP, you will learn how to properly secure your database, from both internal and external threats. Best practices and specific tricks employed by the author will also be revealed. Learn expert techniques to protec

  12. License - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Gclust Server License: License to Use This Database. Last updated: 2009/09/14. You may use this database under the terms described below. The Standard License specifies the license terms regarding the use of this database and the requirements you must follow in using this database. The Additional License specifies … license. Standard License: the Standard License for this database is the license specified in the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, plea…

  13. The SQL Server Database for Non Computer Professional Teaching Reform

    Science.gov (United States)

    Liu, Xiangwei

    2012-01-01

    This paper summarizes the teaching methods of the SQL Server database course for non-computer majors and analyzes the current situation of the course. In line with the teaching characteristics of the curriculum for non-computer majors, it puts forward some teaching reform methods and puts them into practice, improving the students' analytical ability, practical ability and…

  14. CheD: chemical database compilation tool, Internet server, and client for SQL servers.

    Science.gov (United States)

    Trepalin, S V; Yarkov, A V

    2001-01-01

    An efficient program, which runs on a personal computer, for the storage, retrieval, and processing of chemical information, is presented. The program can work as a stand-alone application, in conjunction with a specifically written Web server application, or with some standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.

  15. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in a user's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database, highlighting the visual information which is common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
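
    The server-side bag-of-words model mentioned above can be sketched compactly: local descriptors are quantized against a learned visual vocabulary, each image becomes a normalized histogram of visual words, and ranking reduces to a dot product. The sketch below assumes OpenCV and scikit-learn, with made-up file names and vocabulary size, and omits the paper's state-of-the-art extensions.

    ```python
    # Bag-of-visual-words sketch: ORB descriptors, k-means vocabulary,
    # per-image word histograms, cosine-similarity ranking.
    import cv2
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    orb = cv2.ORB_create(nfeatures=500)

    def descriptors(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        return desc.astype(np.float32)

    db_images = ["db0.jpg", "db1.jpg", "db2.jpg"]          # server collection
    all_desc = np.vstack([descriptors(p) for p in db_images])
    vocab = MiniBatchKMeans(n_clusters=100).fit(all_desc)  # visual vocabulary

    def bow_histogram(path):
        words = vocab.predict(descriptors(path))
        hist = np.bincount(words, minlength=100).astype(np.float32)
        return hist / np.linalg.norm(hist)

    index = np.stack([bow_histogram(p) for p in db_images])
    query = bow_histogram("query.jpg")                     # phone-side image
    ranked = np.argsort(index @ query)[::-1]
    print([db_images[i] for i in ranked])                  # most similar first
    ```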

  16. Physical database design using Oracle

    CERN Document Server

    Burleson, Donald K

    2004-01-01

    INTRODUCTION TO ORACLE PHYSICAL DESIGN: Preface; Relational Databases and Physical Design; Systems Analysis and Physical Database Design; Introduction to Logical Database Design; Entity/Relation Modeling; Bridging between Logical and Physical Models; Physical Design Requirements Validation. PHYSICAL ENTITY DESIGN FOR ORACLE: Data Relationships and Physical Design; Massive De-Normalization: STAR Schema Design; Designing Class Hierarchies; Materialized Views and De-Normalization; Referential Integrity; Conclusion. ORACLE HARDWARE DESIGN: Planning the Server Environment; Designing the Network Infrastructure for Oracle; Oracle Netw…

  17. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    CERN Document Server

    Valassi, A; Kalkhof, A; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN for accessing the data stored by the LHC experiments using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier web server and cache. Two new components have recently been added to CORAL to implement a model involving a middle tier "CORAL server" deployed close to the database and a tree of "CORAL server proxy" instances, with data caching and multiplexing functionalities, deployed close to the client. The new components are meant to provide advantages for read-only and read-write data access, in both offline and online use cases, in the areas of scalability and performance (multiplexing for several incoming connections, optional data caching) and security (authentication via proxy certificates). A first implementation of the two new c...

  1. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    International Nuclear Information System (INIS)

    Valassi, A; Kalkhof, A; Bartoldus, R; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer of 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.

  2. GPCR & company: databases and servers for GPCRs and interacting partners.

    Science.gov (United States)

    Kowalsman, Noga; Niv, Masha Y

    2014-01-01

    G-protein-coupled receptors (GPCRs) are a large superfamily of membrane receptors that are involved in a wide range of signaling pathways. To fulfill their tasks, GPCRs interact with a variety of partners, including small molecules, lipids and proteins. They are accompanied by different proteins during all phases of their life cycle. Therefore, GPCR interactions with their partners are of great interest in basic cell-signaling research and in drug discovery. Due to the rapid development of computers and internet communication, knowledge and data can be easily shared within the worldwide research community via freely available databases and servers. These provide an abundance of biological, chemical and pharmacological information. This chapter describes the available web resources for investigating GPCR interactions. We review about 40 freely available databases and servers, and provide a few sentences about the essence of each and the data they supply. For simplification, the databases and servers were grouped under the following topics: general GPCR-ligand interactions; particular families of GPCRs and their ligands; GPCR oligomerization; GPCR interactions with intracellular partners; and structural information on GPCRs. In conclusion, a multitude of useful tools are currently available. Summary tables are provided to ease navigation between the numerous and partially overlapping resources. Suggestions for future enhancements of the online tools include the addition of links from general to specialized databases and enabling the use of user-supplied templates for GPCR structural modeling.

  3. MARSIS data and simulation exploited using array databases: PlanetServer/EarthServer for sounding radars

    Science.gov (United States)

    Cantini, Federico; Pio Rossi, Angelo; Orosei, Roberto; Baumann, Peter; Misev, Dimitar; Oosthoek, Jelmer; Beccati, Alan; Campalani, Piero; Unnithan, Vikram

    2014-05-01

    MARSIS is an orbital synthetic aperture radar for both ionosphere and subsurface sounding on board ESA's Mars Express (Picardi et al. 2005). It transmits electromagnetic pulses centered at 1.8, 3, 4 or 5 MHz that penetrate below the surface and are reflected by compositional and/or structural discontinuities in the subsurface of Mars. MARSIS data are available as a collection of single-orbit data files. The availability of tools for more effective access to such data would greatly ease data analysis and exploitation by the community of users. For this purpose, we are developing a database built on the raster database management system RasDaMan (e.g. Baumann et al., 1994), to be populated with MARSIS data and integrated in the PlanetServer/EarthServer (e.g. Oosthoek et al., 2013; Rossi et al., this meeting) project. The data (and related metadata) are stored in the database for each frequency used by the MARSIS radar. The capability of retrieving data belonging to a certain orbit, or to multiple orbits on the basis of latitude/longitude boundaries, is a key requirement of the database design; besides the "classical" radargram representation of the data, it allows, in areas with sufficiently high orbit density, 3D extraction, subsetting and analysis of subsurface structures. Moreover, the use of the OGC WCPS (Web Coverage Processing Service) standard allows calculations on database query results for multiple echoes and/or subsets of a certain data product. Because of the low directivity of its dipole antenna, MARSIS receives echoes from portions of the surface of Mars that are distant from nadir and can be mistakenly interpreted as subsurface echoes. For this reason, methods have been developed to simulate surface echoes (e.g. Nouvel et al., 2004), to reveal the true origin of an echo through comparison with instrument data. These simulations are usually time-consuming, and so far have been performed either on a case-by-case basis or in some simplified form. A code for
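
    A WCPS request of the kind referred to above might look as follows; the endpoint URL, coverage name and axis labels are invented for illustration, since the actual PlanetServer deployment defines its own.

    ```python
    # Sketch of a WCPS (Web Coverage Processing Service) query against a
    # rasdaman/petascope endpoint, asking for a lat/long subset of a
    # hypothetical MARSIS coverage. All names here are assumptions.
    import requests

    wcps = """
    for c in (MARSIS_F3MHZ)
    return encode(c[Lat(10:20), Long(100:110)], "csv")
    """

    resp = requests.get(
        "http://planetserver.example.org/rasdaman/ows",
        params={
            "service": "WCS",
            "version": "2.0.1",
            "request": "ProcessCoverages",
            "query": wcps,
        },
    )
    print(resp.text[:200])  # echo values for the requested subset
    ```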

  4. Parameters for Organism Grouping - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Gclust Server data detail. Data name: Parameters for Organism Grouping …

  5. Clustering results - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Gclust Server data detail. Data name: Clustering results. DOI 10.18908/lsdba…

  6. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects, helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design, without changing semantics. You'll learn how to evolve database schemas in step with source code, and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...
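
    A representative refactoring from the book's catalog, Rename Column, illustrates the transition-period discipline the authors advocate: introduce the new column, backfill it, and keep old and new in sync with a trigger until legacy code is retired. The sketch below uses SQLite for self-containment and an invented schema; it is our illustration of the technique, not an example from the book.

    ```python
    # "Rename Column" with a transition period: old and new column coexist,
    # kept consistent by a trigger, so readers and writers migrate gradually.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, fname TEXT);
    INSERT INTO customer VALUES (1, 'Ada'), (2, 'Grace');

    -- step 1: introduce the better-named column and backfill it
    ALTER TABLE customer ADD COLUMN first_name TEXT;
    UPDATE customer SET first_name = fname;

    -- step 2: a synchronization trigger keeps legacy writers consistent
    CREATE TRIGGER sync_fname AFTER UPDATE OF fname ON customer
    BEGIN
        UPDATE customer SET first_name = NEW.fname WHERE id = NEW.id;
    END;
    """)

    db.execute("UPDATE customer SET fname = 'Augusta' WHERE id = 1")  # legacy app
    print(db.execute("SELECT first_name FROM customer WHERE id = 1").fetchone())
    # ('Augusta',) -- code reading the new column already sees the write
    ```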

  7. Brillouin-zone database on the Bilbao Crystallographic Server.

    Science.gov (United States)

    Aroyo, Mois I; Orobengoa, Danel; de la Flor, Gemma; Tasci, Emre S; Perez-Mato, J Manuel; Wondratschek, Hans

    2014-03-01

    The Brillouin-zone database of the Bilbao Crystallographic Server (http://www.cryst.ehu.es) offers k-vector tables and figures which form the background of a classification of the irreducible representations of all 230 space groups. The symmetry properties of the wavevectors are described by the so-called reciprocal-space groups and this classification scheme is compared with the classification of Cracknell et al. [Kronecker Product Tables, Vol. 1, General Introduction and Tables of Irreducible Representations of Space Groups (1979). New York: IFI/Plenum]. The compilation provides a solution to the problems of uniqueness and completeness of space-group representations by specifying the independent parameter ranges of general and special k vectors. Guides to the k-vector tables and figures explain the content and arrangement of the data. Recent improvements and modifications of the Brillouin-zone database, including new tables and figures for the trigonal, hexagonal and monoclinic space groups, are discussed in detail and illustrated by several examples.

  8. Asynchronous data change notification between database server and accelerator controls system

    International Nuclear Information System (INIS)

    Fu, W.; Morris, J.; Nemesure, S.

    2011-01-01

    Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification (ADCN) between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
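
    For illustration, the same trigger-driven pattern is built into some databases. The sketch below shows PostgreSQL's LISTEN/NOTIFY consumed from Python via psycopg2; it is a stand-in for the paper's trigger-plus-reflection-server setup, not the RHIC-AGS implementation, and the table, channel and connection details are invented.

    ```python
    # Trigger-driven change notification: a trigger turns each row change into
    # a pg_notify() event; the client blocks on the socket instead of polling.
    import select
    import psycopg2

    conn = psycopg2.connect("dbname=test")  # assumed local database
    conn.autocommit = True
    cur = conn.cursor()

    cur.execute("CREATE TABLE IF NOT EXISTS settings (name TEXT PRIMARY KEY, value TEXT)")
    cur.execute("""
        CREATE OR REPLACE FUNCTION announce() RETURNS trigger AS $$
        BEGIN
            PERFORM pg_notify('settings_changed', NEW.name);
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;
    """)
    cur.execute("DROP TRIGGER IF EXISTS settings_dcn ON settings")
    cur.execute("""
        CREATE TRIGGER settings_dcn AFTER INSERT OR UPDATE ON settings
        FOR EACH ROW EXECUTE PROCEDURE announce()
    """)

    cur.execute("LISTEN settings_changed;")
    while True:                                  # client-side event loop
        if select.select([conn], [], [], 5.0)[0]:
            conn.poll()
            while conn.notifies:
                note = conn.notifies.pop(0)
                print("changed:", note.payload)  # react, no polling needed
    ```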

  9. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models, Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint…

  10. Asynchronous data change notification between database server and accelerator control systems

    International Nuclear Information System (INIS)

    Wenge Fu; Seth Nemesure; Morris, J.

    2012-01-01

    Database data change notification (DCN) is a commonly used feature: it allows a client to be informed when data have been changed on the server side by another client. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. (authors)

  11. MonetDB/SQL Meets SkyServer: the Challenges of a Scientific Database.

    NARCIS (Netherlands)

    M.G. Ivanova (Milena); N.J. Nes (Niels); R.A. Goncalves (Romulo); M.L. Kersten (Martin)

    2007-01-01

    This paper presents our experiences in porting the Sloan Digital Sky Survey (SDSS)/SkyServer to the state-of-the-art open source database system MonetDB/SQL. SDSS acts as a well-documented benchmark for scientific database management. We have achieved a fully functional prototype for the

  12. The SAMGrid database server component: its upgraded infrastructure and future development path

    International Nuclear Information System (INIS)

    Loebel-Carpenter, L.; White, S.; Baranovski, A.; Garzoglio, G.; Herber, R.; Illingworth, R.; Kennedy, R.; Kreymer, A.; Kumar, A.; Lueking, L.; Lyon, A.; Merritt, W.; Terekhov, I.; Trumbo, J.; Veseli, S.; Burgon-Lyon, M.; St Denis, R.; Belforte, S.; Kerzel, U.; Bartsch, V.; Leslie, M.

    2004-01-01

    The SAMGrid Database Server encapsulates several important services, such as accessing file metadata and replica catalog, keeping track of the processing information, as well as providing the runtime support for SAMGrid station services. Recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes required for the unified metadata catalog has warranted a complete redesign of the DB Server. We describe here the architecture and features of the new server. In particular, we discuss the new CORBA infrastructure that utilizes Python wrapper classes around IDL structs and exceptions. Such infrastructure allows us to use the same code on both server and client sides, which in turn results in significantly improved code maintainability and easier development. We also discuss future integration of the new server with an SBIR II project which is directed toward allowing the DB Server to access distributed databases, implemented in different DB systems and possibly using different schema.

  13. Common hyperspectral image database design

    Science.gov (United States)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces a common hyperspectral image database designed with a demand-oriented method (CHIDB), which brings together ground-based spectra, standardized hyperspectral cubes, and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining ideas and functions were incorporated into CHIDB to make it more suitable for agricultural, geological and environmental applications. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed with an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are stored in SQL Server 2008 for efficient search, query and update, and advanced spectral image processing techniques are used, such as parallel processing in C#. Finally, an application case in agricultural disease detection is presented.

  14. Database server Oracle8i; DB saver Oracle8i

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    Oracle8i is a DBMS (database management system) with functions augmented for use with a DWH (data warehouse). Important augmented functions include hash partitioning and composite partitioning, which enable high-speed retrieval and management of large data volumes, and materialized views, which enable high-speed data counting. Also augmented in terms of usability are functions involving automatic standby, on-line index (re-)editing, and instance recovery. Although Oracle8i names development and operation in coordination with Java as an important augmented item, this remains to be supported at the current stage. (translated by NEDO)
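
    The highlighted features are exercised through ordinary SQL DDL on the Oracle side. As a rough sketch of what hash partitioning and a materialized view look like (using the modern python-oracledb driver and an invented schema, so not specific to Oracle8i):

    ```python
    # Hash partitioning spreads rows across partitions for fast parallel
    # scans; a materialized view precomputes an aggregate for fast counting.
    # Connection details and schema are invented for illustration.
    import oracledb

    conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb")
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE sales (
            sale_id NUMBER,
            region  VARCHAR2(20),
            amount  NUMBER
        ) PARTITION BY HASH (sale_id) PARTITIONS 8
    """)

    cur.execute("""
        CREATE MATERIALIZED VIEW sales_by_region
        AS SELECT region, SUM(amount) AS total FROM sales GROUP BY region
    """)
    ```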

  15. Prefix list for each organism - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Gclust Server data detail. Data name: Prefix list for each organism. DOI 10.18908/lsdba.nbdc00464-006. Description of data contents: list of prefixes for organisms used in Gclust. Each prefix is applied to the top of the sequence ID according to each organism. The first line specifies the number of organism species (95). From the second line, the prefix of each organism …

  16. Construction of database server system for fuel thermo-physical properties

    International Nuclear Information System (INIS)

    Park, Chang Je; Kang, Kwon Ho; Song, Kee Chan

    2003-12-01

    To evaluate various fuels in nuclear reactors, not only mechanical properties but also thermo-physical properties are required as some of the most important inputs to a fuel performance code system. The main objective of this study is to build a database system for fuel thermo-physical properties, and a PC-based hardware system has been constructed for ease of public use, with visualization through a web-based server system. This report deals with the hardware and software used in the database server system for nuclear fuel thermo-physical properties. Opening the database of fuel properties to the public is expected to make nuclear fuel data obtainable without difficulty, and to be helpful to research and development of various fuels in the nuclear industry. Furthermore, the proposed models of nuclear fuel thermo-physical properties can be fully utilized in the fuel performance code system.

  17. BioBarcode: a general DNA barcoding database and server platform for Asian biodiversity resources

    Science.gov (United States)

    2009-01-01

    Background DNA barcoding provides a rapid, accurate, and standardized method for species-level identification using short DNA sequences. Such a standardized identification method is useful for mapping all the species on Earth, particularly when DNA sequencing technology is cheaply available. There are many nations in Asia with many biodiversity resources that need to be mapped and registered in databases. Results We have built a general DNA barcode data processing system, BioBarcode, with open source software: a general purpose database and server. It uses mySQL RDBMS 5.0, BLAST2, and the Apache httpd server. An exemplary database of BioBarcode has around 11,300 specimen entries (including GenBank data) and registers the biological species to map their genetic relationships. The BioBarcode database contains a chromatogram viewer which improves the performance of DNA sequence analyses. Conclusion Asia has a very high degree of biodiversity, and the BioBarcode database server system aims to provide an efficient bioinformatics protocol that can be freely used by Asian researchers and research organizations interested in DNA barcoding. BioBarcode promotes the rapid acquisition of biological species DNA sequence data that meet global standards by providing specialized services, and provides useful tools that will make barcoding cheaper and faster in the biodiversity community, such as standardization, depository, management, and analysis of DNA barcode data. The system can be downloaded upon request, and an exemplary server has been constructed with which to build an Asian biodiversity system at http://www.asianbarcode.org. PMID:19958506
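
    The identification step at the core of such a service can be sketched with the standard BLAST+ command-line tools (the successors of the BLAST2 programs the paper used): match an unknown barcode sequence against a locally built nucleotide database. The query file and database name below are assumptions.

    ```python
    # Match an unknown specimen's barcode sequence against a local BLAST
    # nucleotide database (built beforehand with makeblastdb -dbtype nucl).
    import subprocess

    result = subprocess.run(
        ["blastn", "-query", "unknown_specimen.fasta", "-db", "barcodes",
         "-outfmt", "6 qseqid sseqid pident evalue", "-max_target_seqs", "5"],
        capture_output=True, text=True, check=True,
    )

    for line in result.stdout.splitlines():
        _, hit, identity, evalue = line.split("\t")
        print(f"{hit}: {identity}% identity (e={evalue})")
    ```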

  18. Client Server design and implementation issues in the Accelerator Control System environment

    International Nuclear Information System (INIS)

    Sathe, S.; Hoff, L.; Clifford, T.

    1995-01-01

    In distributed system communication software design, the Client Server model has been widely used. This paper addresses the design and implementation issues of such a model, particularly when used in Accelerator Control Systems. In designing the Client Server model, one needs to decide how the services will be defined for a server, what types of messages the server will respond to, which data formats will be used for the network transactions, and how the server will be located by the client. Special consideration needs to be given to error handling on both the server and client side. Since the server usually is located on a machine other than the client, easy and informative server diagnostic capability is required. The higher level abstraction provided by the Client Server model simplifies application writing; however, fine control over network parameters is essential to improve performance. The above-mentioned design issues and implementation trade-offs are discussed in this paper.

  19. Design of Accelerator Online Simulator Server Using Structured Data

    International Nuclear Information System (INIS)

    Shen, Guobao

    2012-01-01

    Model based control plays an important role for a modern accelerator during beam commissioning, beam study, and even daily operation. With a realistic model, beam behaviour can be predicted and therefore effectively controlled. The approach used by most current high level application environments is to use a built-in simulation engine and feed a realistic model into that simulation engine. Instead of this traditional monolithic structure, a new approach using a client-server architecture is under development. An on-line simulator server is accessed via network accessible structured data. With this approach, a user can easily access multiple simulation codes. This paper describes the design, implementation, and current status of PVData, which defines the structured data, and PVAccess, which provides network access to the structured data.
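
    As a sketch of what client access to such structured data can look like, the snippet below uses p4p, a later Python binding for PVAccess (not the paper's own code); the PV name is a placeholder.

    ```python
    # Fetch a structured value over PVAccess with the p4p client bindings.
    # "SIM:MODEL:TWISS" is a placeholder PV name, not the server's channel.
    from p4p.client.thread import Context

    ctx = Context("pva")                # PVAccess client context
    twiss = ctx.get("SIM:MODEL:TWISS")  # a structured Value, not a bare scalar
    print(twiss)                        # e.g. arrays of element names and optics
    ctx.close()
    ```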

  20. Beginning Microsoft SQL Server 2012 Programming

    CERN Document Server

    Atkinson, Paul

    2012-01-01

    Get up to speed on the extensive changes to the newest release of Microsoft SQL Server. The 2012 release of Microsoft SQL Server changes how you develop applications for SQL Server. With this comprehensive resource, SQL Server authority Robert Vieira presents the fundamentals of database design and SQL concepts, and then shows you how to apply these concepts using the updated SQL Server. Published to coincide with the 2012 release, Beginning Microsoft SQL Server 2012 Programming begins with a quick overview of database design basics and the SQL query language and then quickly proceeds to sho

  1. FY1995 next generation highly parallel database / datamining server using 100 PC's and ATM switch; 1995 nendo tasudai no pasokon wo ATM ketsugoshita jisedai choheiretsu database mining server no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The objective of the research is first to build a highly parallel processing system using 100 personal computers and an ATM switch. The former is a computing commodity, while the latter can be regarded as a commodity for future communication systems. The second objective is to implement a parallel relational database management system and a parallel data mining system over the 100-PC cluster system. The third is to run decision-support queries typical of data warehouses, to run association rule mining, and to prove the effectiveness of the proposed architecture as a next-generation parallel database/datamining server. The performance/cost ratio of PCs is significantly better than that of workstations and proprietary systems due to mass production. The cost of ATM switches is also decreasing considerably, since ATM is being widely accepted as a communication infrastructure. By combining 100 PCs as computing commodities and an ATM switch as a communication commodity, we built a large-scale parallel processing system inexpensively. Each node employs a Pentium Pro CPU, and the communication bandwidth between PCs is more than 120 Mbits/sec. A new parallel relational DBMS was designed and implemented. TPC-D, a standard benchmark for decision support applications (100 GBytes), was executed. Our system attained much higher performance than current commercial systems, which are also much more expensive than ours. In addition, we developed a novel parallel data mining algorithm to extract association rules. We implemented it in our system and succeeded in attaining high performance. Thus it is verified that an ATM-connected PC cluster is very promising as a next-generation platform for a large-scale database/datamining server. (NEDO)
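
    The association-rule step can be illustrated with a toy, single-machine version of the computation the project ran in parallel: count itemset supports over transactions, then keep rules whose support and confidence clear thresholds. The data and thresholds below are invented.

    ```python
    # Toy association-rule mining: frequent items, then pair rules that pass
    # minimum support and confidence. Transactions are invented.
    from itertools import combinations

    transactions = [
        {"bread", "milk"}, {"bread", "beer", "eggs"},
        {"milk", "beer", "cola"}, {"bread", "milk", "beer"},
        {"bread", "milk", "cola"},
    ]
    MIN_SUP, MIN_CONF = 0.4, 0.7
    n = len(transactions)

    def support(items):
        return sum(items <= t for t in transactions) / n

    frequent = {i for t in transactions for i in t if support({i}) >= MIN_SUP}

    for a, b in combinations(sorted(frequent), 2):
        sup = support({a, b})
        if sup >= MIN_SUP and sup / support({a}) >= MIN_CONF:
            print(f"{a} -> {b}  (support={sup:.2f}, "
                  f"confidence={sup / support({a}):.2f})")
    ```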

  2. Indexing Bibliographic Database Content Using MariaDB and Sphinx Search Server

    Directory of Open Access Journals (Sweden)

    Arie Nugraha

    2014-07-01

    Full Text Available Fast retrieval of digital content has become mandatory for library and archive information systems. Many software applications have emerged to handle the indexing of digital content, from low-level ones such as Apache Lucene, to more RESTful and web-services-ready ones such as Apache Solr and ElasticSearch. Solr's popularity among library software developers makes it the "de-facto" standard software for indexing digital content. For content (full-text content or bibliographic descriptions) already stored inside a relational DBMS such as MariaDB (a fork of MySQL) or PostgreSQL, Sphinx Search Server (Sphinx) is a suitable alternative. This article introduces how to use Sphinx with MariaDB databases to index database content, as well as some examples of Sphinx API usage.
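
    Because Sphinx speaks the MySQL wire protocol (SphinxQL, on port 9306 by default), a stock MySQL client library is all a client needs. A minimal sketch, assuming a Sphinx index named biblio has been defined in sphinx.conf over the bibliographic tables:

    ```python
    # Query a Sphinx full-text index over its MySQL-protocol listener.
    # The index name "biblio" is an assumption from a hypothetical sphinx.conf.
    import pymysql

    sphinx = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")
    with sphinx.cursor() as cur:
        cur.execute(
            "SELECT id, WEIGHT() AS w FROM biblio WHERE MATCH(%s) LIMIT 10",
            ("colonial history java",),
        )
        for doc_id, weight in cur.fetchall():
            print(doc_id, weight)  # ids to look up back in MariaDB
    ```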

  3. Using a centralised database system and server in the European Union Framework Programme 7 project SEPServer

    Science.gov (United States)

    Heynderickx, Daniel

    2012-07-01

    The main objective of the SEPServer project (EU FP7 project 262773) is to produce a new tool, which greatly facilitates the investigation of solar energetic particles (SEPs) and their origin: a server providing SEP data, related electromagnetic (EM) observations and analysis methods, a comprehensive catalogue of the observed SEP events, and educational/outreach material on solar eruptions. The project is coordinated by the University of Helsinki. The project will combine data and knowledge from 11 European partners and several collaborating parties from Europe and the US. The datasets provided by the consortium partners are collected in a MySQL database (using the ESA Open Data Interface under licence) on a server operated by DH Consultancy, which also hosts a web interface providing browsing, plotting, post-processing and analysis tools developed by the consortium, as well as a Solar Energetic Particle event catalogue. At this stage of the project, a prototype server has been established, which is presently undergoing testing by users inside the consortium. Using a centralized database has numerous advantages, including: homogeneous storage of the data, which eliminates the need for dataset-specific file access routines once the data are ingested in the database; a homogeneous set of metadata describing the datasets on both a global and detailed level, allowing for automated access to and presentation of the various data products; standardised access to the data in different programming environments (e.g. PHP, IDL); elimination of the need to download data for individual data requests. SEPServer will thus add value to several space missions and Earth-based observations by facilitating the coordinated exploitation of and open access to SEP data and related EM observations, and promoting correct use of these data for the entire space research community. This will lead to new knowledge on the production and transport of SEPs during solar eruptions and facilitate the

  4. Object-oriented design for LHD data acquisition using client-server model

    International Nuclear Information System (INIS)

    Kojima, M.; Nakanishi, H.; Hidekuma, S.

    1997-11-01

    The LHD data acquisition system handles a huge amount of data, exceeding 600 MB per shot. The fully distributed processing and the object-oriented system design are the main principles of this system. Its wide flexibility has been realized by introducing the object-oriented method into the data processing, in which the object-sharing and the class libraries will provide the unified way of data handling for both server and client program developments. The object class libraries are written in C++, and the network object-sharing is provided through a commercial software called HARNESS. As for the CAMAC setup, the Java script can use the C++ class libraries and thus establishes the relationship between the object-oriented database and the WWW server. In LHD experiments, the CAMAC system and the Windows NT operating system are applied for digitizing and acquiring data, respectively. For the purpose of the LHD data acquisition, new CAMAC handling software which works on Windows NT has been developed to manipulate the SCSI-connected crate controllers. The CAMAC command lists and diagnostic data classes are shared between clients and servers. A lump of diagnostic data mass is treated as a part of an object by the object-oriented programming. (author)

  5. Object-oriented designs for LHD data acquisitions using client-server model

    International Nuclear Information System (INIS)

    Kojima, M.; Nakanishi, H.; Hidekuma, S.

    1999-01-01

    The LHD data acquisition system handles >600 MB of data per shot. The fully distributed data processing and the object-oriented system design are the main principles of this system. Its wide flexibility has been realized by introducing the object-oriented method into the data processing, in which the object sharing and class libraries will provide the unified way of data handling for the network client-server programming. The object class libraries are described in C++, and the network object sharing is provided through the commercial software named HARNESS. As for the CAMAC setup, the Java script can use the C++ class libraries and thus establishes the relationship between the object-oriented database and the WWW server. In LHD experiments, the CAMAC system and the Windows NT operating system are applied for digitizing and acquiring data, respectively. For the purpose of the LHD data acquisition, new CAMAC handling software on Windows NT has been developed to manipulate the SCSI-connected crate controllers. The CAMAC command lists and diagnostic data classes are shared between client and server computers. A lump of the diagnostic data can be treated as part of an object by the object-oriented programming. (orig.)

  6. Computational resources for ribosome profiling: from database to Web server and software.

    Science.gov (United States)

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This benefits from not only the awesome power of ribosome profiling but also an extensive range of computational resources available for ribosome profiling. At present, however, a comprehensive review on these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference to make appropriate choices among existing resources for the question at hand. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of the administration tasks in this database system. The design was verified by means of a purpose-built access application.

  8. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2011-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fifth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rules

  9. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2005-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fourth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rul

  10. The design of distributed database system for HIRFL

    International Nuclear Information System (INIS)

    Wang Hong; Huang Xinmin

    2004-01-01

    This paper focuses on the distributed database system used in the HIRFL distributed control system. The database of this system is built with SQL Server 2000, and its application system adopts the Client/Server model. Visual C++ is used to develop the applications, which access the database through ODBC. (authors)

  11. Analysis of functionality free CASE-tools databases design

    Directory of Open Access Journals (Sweden)

    A. V. Gavrilov

    2016-01-01

    Full Text Available Introducing CASE technologies for database design into the educational process requires significant expenditure on software by the institution. A possible solution could be the use of free software counterparts. At the same time, this kind of substitution should be based on an even-handed comparison of the functional characteristics and operational features of these programs. The purpose of the article is a review of free and non-profit CASE tools for database design, as well as their classification on the basis of an analysis of their functionality. In writing this article, materials from the official websites of the tool developers were used. Evaluation of the functional characteristics of CASE tools for database design was made exclusively empirically, through direct work with the software products. The analysis of the tools' functionality allows two categories of CASE tools for database design to be distinguished. The first category includes systems with a basic set of features and tools. The most important basic functions of these systems are: management of connections to database servers; visual tools to create and modify database objects (tables, views, triggers, procedures); the ability to enter and edit data in table mode; user and privilege management tools; an SQL-code editor; and means of exporting/importing data. CASE systems of the first category can be used to design and develop simple databases and to manage data, as well as for administering a database server. A distinctive feature of the second category of CASE tools for database design (full-featured systems) is the presence of a visual designer, which allows the construction of the database model and the automatic creation of the database on the server from this model. CASE systems of this category can be used for the design and development of databases of any structural complexity, as well as for database server administration. The article concludes that the

  13. Design of SIP transformation server for efficient media negotiation

    Science.gov (United States)

    Pack, Sangheon; Paik, Eun Kyoung; Choi, Yanghee

    2001-07-01

    Voice over IP (VoIP) is one of the advanced services supported by next generation mobile communication. VoIP should support various media formats and terminals existing together. This heterogeneous environment may prevent diverse users from establishing VoIP sessions among them. To solve the problem, an efficient media negotiation mechanism is required. In this paper, we propose an efficient media negotiation architecture using the transformation server and the Intelligent Location Server (ILS). The transformation server is an extended Session Initiation Protocol (SIP) proxy server. It can modify an unacceptable session INVITE message into an acceptable one using the ILS. The ILS is a directory server based on the Lightweight Directory Access Protocol (LDAP) that keeps the user's location information and available media information. The proposed architecture can eliminate the unnecessary response and re-INVITE messages of the standard SIP architecture. It takes only 1.5 round trip times to negotiate two different media types, while the standard media negotiation mechanism takes 2.5 round trip times. The extra processing time in message handling is negligible in comparison to the reduced round trip time. The experimental results show that the session setup time in the proposed architecture is less than the setup time in standard SIP. These results verify that the proposed media negotiation mechanism is more efficient in solving diversity problems.
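
    The negotiation rewrite at the heart of the proposal can be illustrated with a toy sketch: the transformation server trims the audio m-line of the caller's SDP offer to the payload types the callee supports (looked up in the ILS), so the first INVITE is already acceptable and no re-INVITE is needed. This is our own simplified Python illustration, not the authors' implementation; the capability table stands in for the LDAP lookup.

    ```python
    # Toy sketch: rewrite an SDP audio m-line so only callee-supported
    # RTP payload types remain. The capability table is hypothetical,
    # standing in for the LDAP-based Intelligent Location Server (ILS).
    CALLEE_CODECS = {"bob@example.com": {"0", "8"}}  # 0 = PCMU, 8 = PCMA

    def transform_sdp(sdp: str, callee: str) -> str:
        supported = CALLEE_CODECS.get(callee, set())
        out = []
        for line in sdp.splitlines():
            parts = line.split()
            # m=audio <port> RTP/AVP <payload types...>
            if line.startswith("m=audio") and len(parts) > 3:
                kept = [p for p in parts[3:] if p in supported]
                line = " ".join(parts[:3] + kept)
            out.append(line)
        return "\n".join(out)

    offer = "v=0\nm=audio 49170 RTP/AVP 0 8 97"
    print(transform_sdp(offer, "bob@example.com"))
    # ...the m-line becomes: m=audio 49170 RTP/AVP 0 8
    ```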

  14. Proteins in similarity relationship with the cluster - Gclust Server | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Gclust Server data set: Proteins in similarity relationship with the cluster. DOI: 10.18908/lsdba.nbdc00464-003.

  15. PharmMapper 2017 update: a web server for potential drug target identification with a comprehensive target pharmacophore database.

    Science.gov (United States)

    Wang, Xia; Shen, Yihang; Wang, Shiwei; Li, Shiliang; Zhang, Weilin; Liu, Xiaofeng; Lai, Luhua; Pei, Jianfeng; Li, Honglin

    2017-07-03

    The PharmMapper online tool is a web server for potential drug target identification by reversed pharmacophore matching of the query compound against an in-house pharmacophore model database. The original version of PharmMapper includes more than 7000 target pharmacophores derived from complex crystal structures with corresponding protein target annotations. In this article, we present a new version of the PharmMapper web server, whose backend pharmacophore database is six times larger than the earlier one, with a total of 23 236 proteins covering 16 159 druggable pharmacophore models and 51 431 ligandable pharmacophore models. The expanded target data cover 450 indications and 4800 molecular functions, compared to 110 indications and 349 molecular functions in our last update. In addition, the new web server provides a statistically meaningful ranking of the identified drug targets, achieved through the use of standard scores. It also features an improved user interface. The proposed web server is freely available at http://lilab.ecust.edu.cn/pharmmapper/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor Design Technology Development using Web applications. The KALIMER Design Database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation System, and Reserved Documents. The Results Database holds the research results produced during phase II of Liquid Metal Reactor Design Technology Development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage the data and documents collected since the project's accomplishment. This report describes the hardware and software features and the database design methodology for KALIMER.

  17. Design and realization of multithread CAMAC communication server software based on Winsock

    International Nuclear Information System (INIS)

    Zhang Xia

    2002-01-01

    The author describes the CAMAC communication server software, which applies Winsock and multithreading techniques. The design method of the whole software is given. The realization of the network communication service and the synchronization problem of multiple threads are described in detail.
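
    The pattern described (one service thread per client connection, with synchronization around the shared CAMAC channel) can be sketched in Python rather than Winsock/C++; the port and command handling below are invented for illustration.

    ```python
    # Sketch of a multi-threaded command server: one thread per client,
    # a lock serializing access to the shared CAMAC hardware channel.
    import socketserver
    import threading

    camac_lock = threading.Lock()  # only one thread talks to CAMAC at a time

    class CamacHandler(socketserver.StreamRequestHandler):
        def handle(self):
            for raw in self.rfile:          # one command per line
                cmd = raw.decode().strip()
                with camac_lock:
                    reply = f"OK {cmd}"     # placeholder for a real CAMAC transaction
                self.wfile.write((reply + "\n").encode())

    if __name__ == "__main__":
        with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), CamacHandler) as srv:
            srv.serve_forever()
    ```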

  18. Contributions to Logical Database Design

    Directory of Open Access Journals (Sweden)

    Vitalie COTELEA

    2012-01-01

    Full Text Available This paper treats the problems arising at the stage of logical database design. It comprises a synthesis of the most common inference models for functional dependencies, deals with the problems of building covers for sets of functional dependencies, synthesizes normal forms, presents trends regarding normalization algorithms and gives their time complexity. In addition, it presents a summary of the best-known key-search algorithms, deals with issues of analysis and testing of relational schemes, and summarizes and compares the different features of recognition of acyclic database schemas.
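
    A building block underlying several of the surveyed algorithms (testing implied dependencies, searching for keys) is the attribute-closure computation; a minimal Python sketch, with an invented relation and dependency set, follows.

    ```python
    # Attribute closure X+ under a set of functional dependencies:
    # repeatedly fire any FD whose left side is already in the closure.
    def closure(attrs, fds):
        """fds: iterable of (lhs_set, rhs_set) functional dependencies."""
        result = set(attrs)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in fds:
                if lhs <= result and not rhs <= result:
                    result |= rhs
                    changed = True
        return result

    fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
    print(closure({"A"}, fds))  # {'A', 'B', 'C'}: A determines all of R(A,B,C), so A is a key
    ```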

  19. Table of Cluster and Organism Species Number - Gclust Server | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Gclust Server data set: Table of Cluster and Organism Species Number. Each entry gives the representative sequence ID of a cluster, its length, the number of sequences contained in the cluster, the organism species, and the number of sequences belonging to the cluster for each of the 95 organism species.

  20. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    Science.gov (United States)

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-08

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane proteins (TMPs) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. RNAiFold: a web server for RNA inverse folding and molecular design.

    Science.gov (United States)

    Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan

    2013-07-01

    Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints, such as requiring the GC-content to lie within a certain range or requiring the numbers of strong (GC), weak (AU) and wobble (GU) base pairs to lie in certain ranges, the RNAiFold web server determines one or more RNA sequences whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic and hence is suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem: given a representation of the desired hybridization structure, RNAiFold returns two sequences whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold. Source code for the underlying algorithms, implemented in COMET and supported on Linux, can be downloaded at the server website.

  2. Database design for Physical Access Control System for nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Sathishkumar, T., E-mail: satishkumart@igcar.gov.in; Rao, G. Prabhakara, E-mail: prg@igcar.gov.in; Arumugam, P., E-mail: aarmu@igcar.gov.in

    2016-08-15

    Highlights: • Database design needs to be optimized and highly efficient for real-time operation. • It requires a many-to-many mapping between the Employee table and the Doors table. • This mapping typically contains thousands of records and redundant data. • The proposed novel database design reduces the redundancy and provides abstraction. • This design is incorporated with the access control system developed in-house. - Abstract: A Radio Frequency IDentification (RFID) cum biometric based two-level Access Control System (ACS) was designed and developed for providing access to vital areas of nuclear facilities. The system has both hardware [access controller] and software components [server application, the database and the web client software]. The proposed database design enables grouping of the employees based on the hierarchy of the organization and grouping of the doors based on Access Zones (AZ). This design also illustrates the mapping between the Employee Groups (EG) and the AZ. By following this approach in database design, a higher-level view can be presented to the system administrator, abstracting the inner details of the individual entities and doors. This paper describes the novel approach carried out in designing the database of the ACS.

  3. Database design for Physical Access Control System for nuclear facilities

    International Nuclear Information System (INIS)

    Sathishkumar, T.; Rao, G. Prabhakara; Arumugam, P.

    2016-01-01

    Highlights: • Database design needs to be optimized and highly efficient for real-time operation. • It requires a many-to-many mapping between the Employee table and the Doors table. • This mapping typically contains thousands of records and redundant data. • The proposed novel database design reduces the redundancy and provides abstraction. • This design is incorporated with the access control system developed in-house. - Abstract: A Radio Frequency IDentification (RFID) cum biometric based two-level Access Control System (ACS) was designed and developed for providing access to vital areas of nuclear facilities. The system has both hardware [access controller] and software components [server application, the database and the web client software]. The proposed database design enables grouping of the employees based on the hierarchy of the organization and grouping of the doors based on Access Zones (AZ). This design also illustrates the mapping between the Employee Groups (EG) and the AZ. By following this approach in database design, a higher-level view can be presented to the system administrator, abstracting the inner details of the individual entities and doors. This paper describes the novel approach carried out in designing the database of the ACS.
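
    The grouping idea can be sketched with a minimal schema (ours, not the system's): employees belong to Employee Groups, doors belong to Access Zones, and one compact EG-to-AZ table replaces the thousands-of-rows employee-to-door mapping.

    ```python
    # Minimal SQLite sketch of the EG/AZ abstraction; table and column
    # names are illustrative assumptions, not the ACS's actual schema.
    import sqlite3

    ddl = """
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, eg_id INTEGER);
    CREATE TABLE door     (id INTEGER PRIMARY KEY, label TEXT, az_id INTEGER);
    CREATE TABLE eg_az    (eg_id INTEGER, az_id INTEGER, PRIMARY KEY (eg_id, az_id));
    """

    conn = sqlite3.connect(":memory:")
    conn.executescript(ddl)
    conn.execute("INSERT INTO employee VALUES (1, 'Asha', 10)")
    conn.execute("INSERT INTO door VALUES (7, 'Reactor Hall', 30)")
    conn.execute("INSERT INTO eg_az VALUES (10, 30)")

    # Access check: is this employee's group allowed into this door's zone?
    row = conn.execute("""
        SELECT 1 FROM employee e
        JOIN eg_az g ON g.eg_id = e.eg_id
        JOIN door d  ON d.az_id = g.az_id
        WHERE e.id = ? AND d.id = ?
    """, (1, 7)).fetchone()
    print("access granted" if row else "access denied")
    ```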

  4. Toward automating the database design process

    International Nuclear Information System (INIS)

    Asprey, P.L.

    1979-01-01

    One organization's approach to designing complex, interrelated databases is described. The problems encountered and the techniques developed are discussed. A set of software tools to aid the designer and to produce an initial database design directly is presented. 5 figures

  5. ProBiS tools (algorithm, database, and web servers) for predicting and modeling of biologically interesting proteins.

    Science.gov (United States)

    Konc, Janez; Janežič, Dušanka

    2017-09-01

    ProBiS (Protein Binding Sites) Tools consist of algorithm, database, and web servers for prediction of binding sites and protein ligands based on the detection of structurally similar binding sites in the Protein Data Bank. In this article, we review the operations that ProBiS Tools perform, provide comments on the evolution of the tools, and give some implementation details. We review some of its applications to biologically interesting proteins. ProBiS Tools are freely available at http://probis.cmm.ki.si and http://probis.nih.gov. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTIC ASSESSMENT

    Science.gov (United States)

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  7. Beginning database design from novice to professional

    CERN Document Server

    Churcher, Clare

    2012-01-01

    Beginning Database Design, Second Edition provides short, easy-to-read explanations of how to get database design right the first time. This book offers numerous examples to help you avoid the many pitfalls that entrap new and not-so-new database designers. Through the help of use cases and class diagrams modeled in the UML, you'll learn to discover and represent the details and scope of any design problem you choose to attack. Database design is not an exact science. Many are surprised to find that problems with their databases are caused by poor design rather than by difficulties in using th

  8. Design of database management system for 60Co container inspection system

    International Nuclear Information System (INIS)

    Liu Jinhui; Wu Zhifang

    2007-01-01

    The functions of the database management system were designed according to the features of the cobalt-60 container inspection system, and the corresponding software was constructed. Database querying and searching are included in the software. The database operation program is built on Microsoft SQL Server and Visual C++ under Windows 2000. The software implements database querying, image and graph display, statistics, report forms and their printing, interface design, etc. It is powerful and flexible in operation and information querying, and it has been successfully used in the real database management system of the cobalt-60 container inspection system. (authors)

  9. Web Application Design Using Server-Side JavaScript

    Energy Technology Data Exchange (ETDEWEB)

    Hampton, J.; Simons, R.

    1999-02-01

    This document describes the application design philosophy for the Comprehensive Nuclear Test Ban Treaty Research & Development Web Site. This design incorporates object-oriented techniques to produce a flexible and maintainable system of applications that support the web site. These techniques will be discussed at length along with the issues they address. The overall structure of the applications and their relationships with one another will also be described. The current problems and future design changes will be discussed as well.

  10. Web server of the Centre for Photonuclear Experiments Data of the Scientific Research Institute for Nuclear Physics, Moscow State University: Hypertext version of the nuclear physics database

    Energy Technology Data Exchange (ETDEWEB)

    Boboshin, I N; Varlamov, A V; Varlamov, V V; Rudenko, D S; Stepanov, M E [D.V. Skobel' tsyn Scientific Research Institute for Nuclear Physics, M.V. Lomonosov Moscow State University, Centre for Photonuclear Experiments Data (Russian Federation)

    2001-02-01

    The nuclear databases which have been developed at the Centre for Photonuclear Experiments Data of the D.V. Skobel'tsyn Scientific Research Institute for Nuclear Physics, M.V. Lomonosov Moscow State University, and put on the Centre's web server, are presented. The possibilities for working with these databases on the Internet are described. (author)

  11. Web server of the Centre for Photonuclear Experiments Data of the Scientific Research Institute for Nuclear Physics, Moscow State University: Hypertext version of the nuclear physics database

    International Nuclear Information System (INIS)

    Boboshin, I.N.; Varlamov, A.V.; Varlamov, V.V.; Rudenko, D.S.; Stepanov, M.E.

    2001-01-01

    The nuclear databases which have been developed at the Centre for Photonuclear Experiments Data of the D.V. Skobel'tsyn Scientific Research Institute for Nuclear Physics, M.V. Lomonosov Moscow State University, and put on the Centre's web server, are presented. The possibilities for working with these databases on the Internet are described. (author)

  12. Automating Relational Database Design for Microcomputer Users.

    Science.gov (United States)

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  13. KALIMER design database development and operation manual

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Hahn, Do Hee; Lee, Yong Bum; Chang, Won Pyo

    2000-12-01

    KALIMER Design Database was developed to support the integrated management of Liquid Metal Reactor Design Technology Development using Web applications. The KALIMER Design Database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation System, and Reserved Documents. The Results Database is a research results database for mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage the data and documents collected since the project's accomplishment.

  14. KALIMER design database development and operation manual

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Hahn, Do Hee; Lee, Yong Bum; Chang, Won Pyo

    2000-12-01

    KALIMER Design Database was developed to support the integrated management of Liquid Metal Reactor Design Technology Development using Web applications. The KALIMER Design Database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation System, and Reserved Documents. The Results Database is a research results database for mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage the data and documents collected since the project's accomplishment.

  15. CRISPR-FOCUS: A web server for designing focused CRISPR screening experiments

    OpenAIRE

    Cao, Qingyi; Ma, Jian; Chen, Chen-Hao; Xu, Han; Chen, Zhi; Li, Wei; Liu, X. Shirley

    2017-01-01

    The recently developed CRISPR screen technology, based on the CRISPR/Cas9 genome editing system, enables genome-wide interrogation of gene functions in an efficient and cost-effective manner. Although many computational algorithms and web servers have been developed to design single-guide RNAs (sgRNAs) with high specificity and efficiency, algorithms specifically designed for conducting CRISPR screens are still lacking. Here we present CRISPR-FOCUS, a web-based platform to search and prioriti...

  16. Amino acid sequences of predicted proteins and their annotation for 95 organism species. - Gclust Server | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Gclust Server data set: Amino acid sequences of predicted proteins and their annotation for 95 organism species. DOI: 10.18908/lsdba.nbdc00464-001.

  17. FireProt: web server for automated design of thermostable proteins

    Science.gov (United States)

    Musil, Milos; Stourac, Jan; Brezovsky, Jan; Prokop, Zbynek; Zendulka, Jaroslav; Martinek, Tomas

    2017-01-01

    Abstract There is continuous interest in increasing protein stability to enhance usability in numerous biomedical and biotechnological applications. A number of in silico tools for predicting the effect of mutations on protein stability have been developed recently. However, the existing tools typically predict only single-point mutations with a small effect on protein stability, and these have to be followed by laborious protein expression, purification, and characterization. Here, we present FireProt, a web server for the automated design of multiple-point thermostable mutant proteins that combines structural and evolutionary information in its calculation core. FireProt utilizes sixteen tools and three protein engineering strategies for making reliable protein designs. The server is complemented with an interactive, easy-to-use interface that allows users to directly analyze and optionally modify the designed thermostable mutants. FireProt is freely available at http://loschmidt.chemi.muni.cz/fireprot. PMID:28449074

  18. RBscore&NBench: a high-level web server for nucleic acid binding residues prediction with a large-scale benchmarking database.

    Science.gov (United States)

    Miao, Zhichao; Westhof, Eric

    2016-07-08

    RBscore&NBench combines a web server, RBscore, and a database, NBench. RBscore predicts RNA-/DNA-binding residues in proteins and visualizes the prediction scores and features on protein structures. The scoring scheme of RBscore directly links feature values to nucleic acid binding probabilities and illustrates the nucleic acid binding energy funnel on the protein surface. To avoid biases from the dataset, the binding-site definition and the assessment metric, we compared RBscore with 18 web servers and 3 stand-alone programs on 41 datasets, which demonstrated the high and stable accuracy of RBscore. A comprehensive comparison led us to develop a benchmark database named NBench. The web server is available at: http://ahsoka.u-strasbg.fr/rbscorenbench/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Relational Database Design in Information Science Education.

    Science.gov (United States)

    Brooks, Terrence A.

    1985-01-01

    Reports on database management system (dbms) applications designed by library school students for university community at University of Iowa. Three dbms design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using data dictionary approach…

  20. Database design using entity-relationship diagrams

    CERN Document Server

    Bagui, Sikha

    2011-01-01

    Contents: Data, Databases, and the Software Engineering Process; Data; Building a Database; What is the Software Engineering Process?; Entity Relationship Diagrams and the Software Engineering Life Cycle (Phase 1: Get the Requirements for the Database; Phase 2: Specify the Database; Phase 3: Design the Database); Data and Data Models; Files, Records, and Data Items; Moving from 3 × 5 Cards to Computers; Database Models (The Hierarchical Model; The Network Model; The Relational Model); The Relational Model and Functional Dependencies; Fundamental Relational Database; Relational Database and Sets; Functional

  1. BAPA Database: a Landslide Inventory in the Principality of Asturias (NW Spain) by Using Press Archives and Free Cartographic Servers

    Science.gov (United States)

    Valenzuela, P.; Domínguez-Cuesta, M. J.; Jiménez-Sánchez, M.; Mora García, M. A.

    2015-12-01

    Due to its geological and climatic conditions, landslides are very common and widespread phenomena in the Principality of Asturias (NW Spain), causing economic losses and, sometimes, human victims. In this scenario, temporal prediction of instabilities becomes particularly important. Although previous knowledge indicates that rainfall is the main trigger, the lack of data hinders proper temporal forecasting of landslides in the region. To resolve this deficiency, a new landslide inventory is being developed: the BAPA (Base de datos de Argayos del Principado de Asturias-Principality of Asturias Landslide Database). Data collection is mainly performed through the gathering of local newspaper archives, with special emphasis on the registration of spatial and temporal information. Moreover, a BAPA App and a BAPA website (http://geol.uniovi.es/BAPA) have been developed to easily obtain additional information from authorities and private individuals. Presently, the dataset covers the period 1980-2015, registering more than 2000 individual landslide events. Fifty-two per cent of the records provide accurate dates, showing the usefulness of press archives as temporal records. The use of free cartographic servers, such as Google Maps, Google Street View and Iberpix (Government of Spain), combined with the spatial descriptions and photographs contained in the press releases, makes it possible to determine the exact location in fifty-eight per cent of the records. Field work performed to date has allowed the validation of the methodology proposed to obtain spatial data. In addition, the BAPA database contains information about: source, typology of landslides, triggers, damages and costs.

  2. TcruziDB, an Integrated Database, and the WWW Information Server for the Trypanosoma cruzi Genome Project

    Directory of Open Access Journals (Sweden)

    Degrave Wim

    1997-01-01

    Full Text Available Data analysis, presentation and distribution are of utmost importance to a genome project. A public domain software, ACeDB, has been chosen as the common basis for parasite genome databases, and a first release of TcruziDB, the Trypanosoma cruzi genome database, is available by ftp from ftp://iris.dbbm.fiocruz.br/pub/genomedb/TcruziDB, as well as versions of the software for different operating systems (ftp://iris.dbbm.fiocruz.br/pub/unixsoft/. Moreover, data originated from the project are available from the WWW server at http://www.dbbm.fiocruz.br. It contains biological and parasitological data on CL Brener, its karyotype, all available T. cruzi sequences from Genbank, data on the EST-sequencing project and on available libraries, a T. cruzi codon table and a listing of activities and participating groups in the genome project, as well as meeting reports. T. cruzi discussion lists (tcruzi-l@iris.dbbm.fiocruz.br and tcgenics@iris.dbbm.fiocruz.br are being maintained for communication and to promote collaboration in the genome project.

  3. Cluster based on sequence comparison of homologous proteins of 95 organism species - Gclust Server | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Gclust Server data set: Cluster based on sequence comparison of homologous proteins of 95 organism species.

  4. System Level Design of Reconfigurable Server Farms Using Elliptic Curve Cryptography Processor Engines

    Directory of Open Access Journals (Sweden)

    Sangook Moon

    2014-01-01

    Full Text Available As today's hardware architectures become more and more complicated, it is getting harder to modify or improve the microarchitecture of a design at register transfer level (RTL). Consequently, the traditional methods we have used to develop a design are not capable of coping with complex designs. In this paper, we suggest a way of designing complex digital logic circuits with a soft, advanced flavor of SystemVerilog at the electronic system level. We apply the concept of design-and-reuse with a high level of abstraction to implement elliptic curve crypto-processor server farms. With this level of abstraction, superior to the RTL used in traditional HDL design, we achieved a soft implementation of the crypto-processor server farms as well as robust test-bench code with trivial effort in the same simulation environment. Otherwise, it would have required error-prone Verilog simulations for the hardware IPs and other time-consuming jobs such as C/SystemC verification for the software, sacrificing more time and effort. In the design of the elliptic curve cryptography processor engine, we propose a 3X faster GF(2^m) serial multiplication architecture.
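
    The arithmetic named at the end, serial multiplication in GF(2^m), can be modeled behaviorally in a few lines. The sketch below is a generic Python model of a bit-serial shift-and-reduce datapath (using GF(2^8) with the AES polynomial for illustration), not the paper's 3X-faster SystemVerilog architecture.

    ```python
    # Behavioral model of bit-serial GF(2^m) multiplication: one bit of b
    # is consumed per "clock", with reduction by the field polynomial.
    def gf2m_mul(a: int, b: int, m: int = 8, poly: int = 0b100011011) -> int:
        acc = 0
        for i in reversed(range(m)):    # MSB of b first, one bit per cycle
            acc <<= 1                   # shift the accumulator
            if acc >> m:                # overflow: reduce mod the polynomial
                acc ^= poly
            if (b >> i) & 1:            # conditionally add (XOR) operand a
                acc ^= a
        return acc

    print(hex(gf2m_mul(0x57, 0x83)))    # 0xc1, the classic AES field example
    ```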

  5. Improvement of the Oracle setup and database design at the Heidelberg ion therapy center

    International Nuclear Information System (INIS)

    Hoeppner, K.; Haberer, T.; Mosthaf, J.M.; Peters, A.; Thomas, M.; Welde, A.; Froehlich, G.; Juelicher, S.; Schaa, V. R.W.; Schiebel, W.; Steinmetz, S.

    2012-01-01

    The HIT (Heidelberg Ion Therapy) center is an accelerator facility for cancer therapy using both carbon ions and protons, located at the university hospital in Heidelberg. It provides three therapy treatment rooms: two with fixed beam exit (both in clinical use), and a unique gantry with a rotating beam head, currently under commissioning. The backbone of the proprietary accelerator control system consists of an Oracle database running on a Windows server, storing and delivering data on beam cycles, error logging, measured values, and the device parameters and beam settings for about 100,000 combinations of energy, beam size and particle rate used in treatment plans. Since going operational, we have found some performance problems with the database setup. We therefore started an analysis focused on the following topics: the hardware resources of the database server, the configuration of the Oracle instance, and a review of the database design, which has undergone several changes since its original version. The analysis revealed issues in all three areas. The outdated server will soon be replaced by a state-of-the-art machine. We present improvements to the Oracle configuration and the optimization of SQL statements, along with performance tuning of the database design through new indexes, whose effect was directly visible in accelerator operation, while data integrity was improved by additional foreign key constraints. (authors)

  6. A real time multi-server multi-client coherent database for a new high voltage system

    International Nuclear Information System (INIS)

    Gorbics, M.; Green, M.

    1995-01-01

    A high voltage system has been designed to allow multiple users (clients) access to the database of measured values and settings. This database is actively maintained in real time for a given mainframe containing multiple modules, each having its own database. With limited CPU and memory resources, the mainframe system provides a data coherency scheme for multiple clients which (1) allows a client to determine when and what values need to be updated, (2) allows changes from one client to be detected by another client, and (3) does not depend on the mainframe system tracking client accesses.
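
    One common way to obtain such coherency, sketched below under our own assumptions (this is not the paper's protocol), is to stamp every write with a monotonically increasing version number, so a client can ask for everything changed since its last known version and thereby see other clients' updates.

    ```python
    # Version-stamped store: clients poll changes_since(their_version)
    # to learn when and which values need updating.
    import threading

    class CoherentStore:
        def __init__(self):
            self._lock = threading.Lock()
            self._version = 0
            self._data = {}              # key -> (value, version at last write)

        def write(self, key, value):
            with self._lock:
                self._version += 1
                self._data[key] = (value, self._version)
                return self._version

        def changes_since(self, since):
            with self._lock:
                delta = {k: v for k, (v, ver) in self._data.items() if ver > since}
                return delta, self._version

    store = CoherentStore()
    store.write("hv.ch3.setpoint", 1450.0)   # client 1 changes a setting
    delta, latest = store.changes_since(0)   # client 2 detects the change
    print(delta, latest)
    ```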

  7. incaRNAfbinv: a web server for the fragment-based design of RNA sequences

    Science.gov (United States)

    Drory Retwitzer, Matan; Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme; Barash, Danny

    2016-01-01

    Abstract In recent years, new methods for computational RNA design have been developed and applied to various problems in synthetic biology and nanotechnology. Lately, there is considerable interest in incorporating essential biological information when solving the inverse RNA folding problem. Correspondingly, RNAfbinv aims at including biologically meaningful constraints and is the only program to-date that performs a fragment-based design of RNA sequences. In doing so it allows the design of sequences that do not necessarily exactly fold into the target, as long as the overall coarse-grained tree graph shape is preserved. Augmented by the weighted sampling algorithm of incaRNAtion, our web server called incaRNAfbinv implements the method devised in RNAfbinv and offers an interactive environment for the inverse folding of RNA using a fragment-based design approach. It takes as input: a target RNA secondary structure; optional sequence and motif constraints; optional target minimum free energy, neutrality and GC content. In addition to the design of synthetic regulatory sequences, it can be used as a pre-processing step for the detection of novel natural occurring RNAs. The two complementary methodologies RNAfbinv and incaRNAtion are merged together and fully implemented in our web server incaRNAfbinv, available at http://www.cs.bgu.ac.il/incaRNAfbinv. PMID:27185893

  8. Nuclear integrated database and design advancement system

    International Nuclear Information System (INIS)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young.

    1997-01-01

    The objective of NuIDEAS is to computerize design processes through an integrated database, eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and the 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and the 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed and its prototype developed by applying rapidly evolving computer technology. The major results of the first-year research were to establish the architecture of the integrated database ensuring data consistency, and to build the design database of the reactor coolant system and heavy components. Various software tools were also developed to search, share and utilize the data through networks; detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed, and walk-through simulation using the models was developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs

  9. Foundations of SQL Server 2008 R2 Business Intelligence

    CERN Document Server

    Fouche, Guy

    2011-01-01

    Foundations of SQL Server 2008 R2 Business Intelligence introduces the entire exciting gamut of business intelligence tools included with SQL Server 2008. Microsoft has designed SQL Server 2008 to be more than just a database. It's a complete business intelligence (BI) platform. The database is at its core, and surrounding the core are tools for data mining, modeling, reporting, analyzing, charting, and integration with other enterprise-level software packages. SQL Server 2008 puts an incredible amount of BI functionality at your disposal. But how do you take advantage of it? That's what this

  10. Design and implementation of streaming media server cluster based on FFMpeg.

    Science.gov (United States)

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in single-server streaming media systems. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations, and the balance among servers is maintained by a dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experimental results show that the server cluster system significantly alleviates network congestion and improves performance in comparison with the single-server system.

  11. Design and Implementation of Streaming Media Server Cluster Based on FFMpeg

    Science.gov (United States)

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in single-server streaming media systems. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations, and the balance among servers is maintained by a dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experimental results show that the server cluster system significantly alleviates network congestion and improves performance in comparison with the single-server system. PMID:25734187
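
    The redirection idea is easy to sketch: each server periodically reports its load, and a new client is sent to the least-loaded server serving its region. The Python sketch below is our own illustration with invented server names and loads; the paper's active-feedback algorithm is more elaborate.

    ```python
    # Pick the least-loaded server in the client's region, falling back
    # to the whole cluster when the region is unknown. All data invented.
    reported_load = {                  # refreshed by servers' feedback messages
        "media-east-1": 0.82, "media-east-2": 0.35, "media-west-1": 0.51,
    }
    region_servers = {"east": ["media-east-1", "media-east-2"],
                      "west": ["media-west-1"]}

    def redirect(client_region: str) -> str:
        candidates = region_servers.get(client_region, list(reported_load))
        return min(candidates, key=lambda s: reported_load[s])

    print(redirect("east"))            # -> media-east-2 (lowest reported load)
    ```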

  12. VizPrimer: a web server for visualized PCR primer design based on known gene structure.

    Science.gov (United States)

    Zhou, Yang; Qu, Wubin; Lu, Yiming; Zhang, Yanchun; Wang, Xiaolei; Zhao, Dongsheng; Yang, Yi; Zhang, Chenggang

    2011-12-15

    The visualization of gene structure plays an important role in polymerase chain reaction (PCR) primer design, especially for eukaryotic genes with a number of splice variants that users need to distinguish between via PCR. Here, we describe a visualized web server for primer design named VizPrimer. It utilizes new information technology (IT) tools: HTML5 to display gene structure and JavaScript to interact with the users. In VizPrimer, users can focus their attention on the gene structure and primer design strategy, without wasting time calculating the exon positions of splice variants or manually configuring complicated parameters. In addition, VizPrimer is also suitable for the design of PCR primers for amplifying open reading frames and detecting single nucleotide polymorphisms (SNPs). VizPrimer is freely available at http://biocompute.bmi.ac.cn/CZlab/VizPrimer/. Supported browsers: Chrome (≥5.0), Firefox (≥3.0), Safari (≥4.0) and Opera (≥10.0). Contact: zhangcg@bmi.ac.cn; yangyi528@vip.sina.com.

  13. Microsoft SQL Server 2012 bible

    CERN Document Server

    Jorgensen, Adam; LeBlanc, Patrick; Cherry, Denny; Nelson, Aaron

    2012-01-01

    Harness the powerful new SQL Server 2012 Microsoft SQL Server 2012 is the most significant update to this product since 2005, and it may change how database administrators and developers perform many aspects of their jobs. If you're a database administrator or developer, Microsoft SQL Server 2012 Bible teaches you everything you need to take full advantage of this major release. This detailed guide not only covers all the new features of SQL Server 2012, it also shows you step by step how to develop top-notch SQL Server databases and new data connections and keep your databases performing at p

  14. Evolution of the Configuration Database Design

    International Nuclear Information System (INIS)

    Salnikov, A.

    2006-01-01

    The BABAR experiment at SLAC has been successfully collecting physics data since 1999. One of the major parts of its on-line system is the configuration database, which provides other parts of the system with the configuration data necessary for data taking. Originally the configuration database was implemented in the Objectivity/DB ODBMS. Recently BABAR performed a successful migration of its event store from Objectivity/DB to ROOT, and this prompted a complete phase-out of Objectivity/DB in all other BABAR databases. It required a complete redesign of the configuration database to hide implementation details and to support multiple storage technologies. In this paper we describe the process of migrating the configuration database, its new design, the implementation strategy and details.

  15. Aspects of the design of distributed databases

    OpenAIRE

    Burlacu Irina-Andreea

    2011-01-01

    Distributed data: data processed by a system can be distributed among several computers while remaining accessible from any of them. A distributed database design problem is presented that involves the development of a global model, a fragmentation, and a data allocation. The student is given a conceptual entity-relationship model for the database, a description of the transactions and a generic network environment. A stepwise solution approach to this problem is shown, based on mean value a...

  16. The design and implementation about the project of optimizing proxy servers

    International Nuclear Information System (INIS)

    Wu Ling; Liu Baoxu

    2006-01-01

    Proxy servers are important facilities in the network of an organization; they play an important role in security and access control and in accelerating Internet access. This article introduces the role of proxy servers and expounds the solutions adopted to optimize the proxy servers at IHEP: integration, dynamic domain-name resolution and data synchronization. (authors)

  17. Design, Implementation and Testing of a Tiny Multi-Threaded DNS64 Server

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-03-01

    Full Text Available DNS64 is going to be an important service (together with NAT64) in the coming years of the IPv6 transition, enabling clients having only IPv6 addresses to reach servers having only IPv4 addresses (the majority of servers on the Internet today). This paper describes the design, implementation and functional testing of MTD64, a flexible, easy-to-use, multi-threaded DNS64 proxy published as free software under the GPLv2 license. All the theoretical background is introduced, including the DNS message format, the operation of the DNS64 plus NAT64 solution and the construction of IPv4-embedded IPv6 addresses. Our design decisions are fully disclosed, from the high-level ones to the details. The implementation is introduced at a high level only, as the details can be found in the developer documentation. The most important parts of thorough functional testing are included, as well as the results of a basic performance comparison with BIND.
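
    The IPv4-embedded IPv6 address construction the abstract mentions is standardized in RFC 6052; with the well-known /96 prefix 64:ff9b::, the synthesis a DNS64 server performs amounts to placing the IPv4 address in the low 32 bits, as this Python sketch shows.

    ```python
    # Synthesize the AAAA answer a DNS64 server would return for a
    # v4-only host, using the RFC 6052 well-known prefix 64:ff9b::/96.
    import ipaddress

    def synthesize_aaaa(ipv4: str, prefix: str = "64:ff9b::") -> str:
        v4 = int(ipaddress.IPv4Address(ipv4))
        v6 = ipaddress.IPv6Address(int(ipaddress.IPv6Address(prefix)) | v4)
        return str(v6)

    print(synthesize_aaaa("192.0.2.1"))   # -> 64:ff9b::c000:201
    ```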

  18. Designing the Database of Speech Under Stress

    Directory of Open Access Journals (Sweden)

    Sabo Róbert

    2017-12-01

    Full Text Available This study describes the methodology used for designing a database of speech under real stress. Based on the limits of existing stress databases, we used a communication task via a computer game to collect speech data. To validate the presence of stress, known psychophysiological indicators such as heart rate and electrodermal activity, as well as subjective self-assessment, were used. This paper presents the data from the first 5 speakers (3 men, 2 women) who participated in initial tests of the proposed design. In 4 out of 5 speakers, increases in fundamental frequency and intensity of speech were registered. Similarly, in 4 out of 5 speakers, heart rate was significantly increased during the task compared with a reference measurement taken before the task. These first results show that the proposed design might be appropriate for building a database of speech under stress. However, there are still considerations that need to be addressed.

  19. Relational databases for SSC design and control

    International Nuclear Information System (INIS)

    Barr, E.; Peggs, S.; Saltmarsh, C.

    1989-01-01

    Most people agree that a database is A Good Thing, but there is much confusion in the jargon used, and in what jobs a database management system and its peripheral software can and cannot do. During the life cycle of an enormous project like the SSC, from conceptual and theoretical design, through research and development, to construction, commissioning and operation, an enormous amount of data will be generated. Some of these data, originating in the early parts of the project, will be needed during commissioning or operation, many years in the future. Two of these pressing data management needs - from the magnet research and industrialization programs and the lattice design - have prompted work on understanding and adapting commercial database practices for scientific projects. Modern relational database management systems (rDBMSs) cope naturally with a large proportion of the requirements of data structures like the SSC database structure built for superconducting cable supplies, uses, and properties. This application is similar to the commercial applications for which these database systems were developed. The SSC application has further requirements not immediately satisfied by the commercial systems. These derive from the diversity of the data structures to be managed, the changing emphases and uses during the project lifetime, and the large amount of scientific data processing to be expected. 4 refs., 5 figs.

  20. Design research of uranium mine borehole database

    International Nuclear Information System (INIS)

    Xie Huaming; Hu Guangdao; Zhu Xianglin; Chen Dehua; Chen Miaoshun

    2008-01-01

    With energy sources in short supply, uranium exploration has been intensified, but the storage, analysis and use of uranium exploration data are currently not highly computerized in China; the data are poorly shared and used, and cannot meet the needs of production and research. This would be much improved if the data were stored and managed in a database system. The conceptual structure design, logical structure design and data integrity checks are discussed according to the application requirements and an analysis of uranium exploration data. An application of the database is illustrated finally. (authors)

  1. Karst database development in Minnesota: Design and data assembly

    Science.gov (United States)

    Gao, Y.; Alexander, E.C.; Tipping, R.G.

    2005-01-01

    The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to provide comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces. © Springer-Verlag 2005.

  2. Designing for Peta-Scale in the LSST Database

    Science.gov (United States)

    Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.

    2007-10-01

    The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. To do this in real-time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables that contain many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.
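
    As a minimal illustration of the horizontal-partitioning idea (our own sketch, not LSST code), rows can be routed to an array of database servers by a stable hash of a partitioning key, so ingest and queries fan out across servers in parallel.

    ```python
    # Stable hash-based sharding: the same object always maps to the
    # same shard. The shard count and key choice are illustrative.
    import hashlib

    N_SHARDS = 16

    def shard_for(object_id: int) -> int:
        digest = hashlib.sha1(str(object_id).encode()).digest()
        return int.from_bytes(digest[:4], "big") % N_SHARDS

    for oid in (42, 43):
        print(f"object {oid} -> shard {shard_for(oid)}")
    ```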

  3. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  4. An Electronic Healthcare Record Server Implemented in PostgreSQL

    Directory of Open Access Journals (Sweden)

    Tony Austin

    2015-01-01

    Full Text Available This paper describes the implementation of an Electronic Healthcare Record server inside a PostgreSQL relational database without dependency on any further middleware infrastructure. The five-part international standard for communicating healthcare records (ISO EN 13606) is used as the information basis for the design of the server. We describe some of the features demanded by this standard that are provided by the server, and other areas where assumptions about the durability of communications or the presence of middleware lead to a poor fit. Finally, we discuss the use of the server in two real-world scenarios, including a commercial application.

  5. Heuristic program to design Relational Databases

    Directory of Open Access Journals (Sweden)

    Manuel Pereira Rosa

    2009-09-01

    Full Text Available The rapid development of today's world means that the amount of information grows day after day, while the time available to convey this information in the classroom has not changed; rational work in this respect is therefore more than necessary. Moreover, if no working algorithm is available for the solution of a given type of problem, a correct solution must first be sought, and here heuristic programs are of paramount importance. Considering that database design is, essentially, a problem-solving process, this article proposes a heuristic program for the design of relational databases.

  6. Design heuristic for parallel many server systems under FCFS-ALIS

    NARCIS (Netherlands)

    Adan, I.J.B.F.; Boon, M.; Weiss, G.

    2016-01-01

    We study a parallel service queueing system with servers of types $s_1,\ldots,s_J$, customers of types $c_1,\ldots,c_I$, a bipartite compatibility graph $\mathcal{G}$, where arc $(c_i, s_j)$ indicates that server type $s_j$ can serve customer type $c_i$, and the service policy of first come first served
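
    The FCFS-ALIS policy named in the title ("first come first served, assign longest-idle server") admits a compact sketch: an arriving customer takes the longest-idle compatible server, or queues FCFS if none is free. The types and compatibility graph below are invented for illustration.

    ```python
    # Toy FCFS-ALIS matching on a bipartite compatibility graph.
    from collections import deque

    compat = {"s1": {"c1"}, "s2": {"c1", "c2"}}   # server type -> customer types
    idle = deque(["s1", "s2"])                    # front = longest idle (ALIS)
    waiting = deque()                             # front = longest waiting (FCFS)

    def arrive(ctype: str) -> str:
        for s in list(idle):                      # scan from longest-idle server
            if ctype in compat[s]:
                idle.remove(s)
                return f"{ctype} served by {s}"
        waiting.append(ctype)                     # no compatible idle server
        return f"{ctype} queued"

    print(arrive("c2"))   # s1 is incompatible, so s2 serves c2
    print(arrive("c1"))   # s1, the remaining idle compatible server, serves c1
    ```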

  7. Design and Delivery of Multiple Server-Side Computer Languages Course

    Science.gov (United States)

    Wang, Shouhong; Wang, Hai

    2011-01-01

    Given the emergence of service-oriented architecture, IS students need to be knowledgeable of multiple server-side computer programming languages to be able to meet the needs of the job market. This paper outlines the pedagogy of an innovative course of multiple server-side computer languages for the undergraduate IS majors. The paper discusses…

  8. Pattern database applications from design to manufacturing

    Science.gov (United States)

    Zhuang, Linda; Zhu, Annie; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh

    2017-03-01

    Pattern-based approaches are becoming more common and popular as the industry moves to advanced technology nodes. At the beginning of a new technology node, a library of process weak-point patterns for physical and electrical verification starts to build up and is used to prevent known hotspots from re-occurring in new designs. The pattern set is then expanded to create test keys for process development, in order to verify manufacturing capability and pre-check new tape-out designs for potential yield detractors. As the database grows, the adoption of pattern-based approaches expands from design flows to technology development and on to mass-production purposes. This paper presents the complete downstream working flows of a design pattern database (PDB). This pattern-based data analysis flow covers different applications across different functional teams: generating enhancement kits to improve design manufacturability, populating new test designs based on previous learning, generating analysis data to improve mass-production efficiency, and in-line control of manufacturing equipment to check machine-status consistency across different fab sites.

  9. Analysis, Design and Implementation of a Web Database With Oracle 8I

    National Research Council Canada - National Science Library

    Demiryurek, Ugur

    2001-01-01

    ....O served as the OS environment. From the technical aspect, Database Management Systems, Web-Database Architectures, Server Extension Programs, and Oracle8i, as well as several other software and hardware...

  10. Microsoft Exchange Server 2013 design, deploy and deliver an enterprise messaging solution

    CERN Document Server

    Winters, Nathan; Blank, Nicolas

    2013-01-01

    Successfully deploy a top-quality Exchange messaging service Rolling out a major messaging service with Microsoft Exchange Server 2013 requires that you not only understand the functionality of this exciting new release, but that you fully grasp all aspects of the larger Exchange server ecosystem as well. This practical book is your best field guide to it all. Written for administrators and consultants in the trenches, this innovative new guide begins with key concepts of Microsoft Exchange Server 2013 and then moves through the recommended practices and processes that are neces

  11. Designing a scalable video-on-demand server with data sharing

    Science.gov (United States)

    Lim, Hyeran; Du, David H. C.

    2001-01-01

    As current disk space and transfer speeds increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources in a VOD server system. Given a representative access profile, our intention is to propose an algorithm that finds an initial condition and places videos on the disks in the system successfully. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in video demand. Considering that this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.
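    As an illustration of the kind of resource-constrained placement this abstract describes, the following minimal Python sketch greedily places videos on disks subject to two resources (storage and bandwidth). The heuristic, names and numbers are assumptions for illustration only, not the authors' O(M log M) algorithm.

```python
# Hypothetical greedy placement sketch; not the paper's algorithm.
# Each video has a storage size and an expected bandwidth demand; each
# disk has remaining capacity for both resources ("two kinds of resources").

def place_videos(videos, disks):
    """videos: list of (name, size, bandwidth); disks: list of dicts with
    'size' and 'bw' capacities. Returns {video name: disk index}."""
    placement = {}
    # Place the most bandwidth-hungry titles first, while resources last.
    for name, size, bw in sorted(videos, key=lambda v: -v[2]):
        feasible = [i for i, d in enumerate(disks)
                    if d["size"] >= size and d["bw"] >= bw]
        if not feasible:
            raise RuntimeError(f"cannot place {name}: add more servers/disks")
        # Prefer the feasible disk with the most spare bandwidth.
        best = max(feasible, key=lambda i: disks[i]["bw"])
        disks[best]["size"] -= size
        disks[best]["bw"] -= bw
        placement[name] = best
    return placement

if __name__ == "__main__":
    videos = [("A", 4, 30), ("B", 2, 50), ("C", 3, 10)]
    disks = [{"size": 6, "bw": 60}, {"size": 6, "bw": 40}]
    print(place_videos(videos, disks))  # e.g. {'B': 0, 'A': 1, 'C': 0}
```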

  12. [Design and establishment of modern literature database about acupuncture Deqi].

    Science.gov (United States)

    Guo, Zheng-rong; Qian, Gui-feng; Pan, Qiu-yin; Wang, Yang; Xin, Si-yuan; Li, Jing; Hao, Jie; Hu, Ni-juan; Zhu, Jiang; Ma, Liang-xiao

    2015-02-01

    A search on acupuncture Deqi was conducted using four Chinese-language biomedical databases (CNKI, Wan-Fang, VIP and CBM) and the PubMed database, using keywords such as "Deqi", "needle sensation", "needling feeling", "needle feel", "obtaining qi", etc. A "Modern Literature Database for Acupuncture Deqi" was then established employing Microsoft SQL Server 2005 Express Edition, introducing the contents, data types, information structure and logic constraints of the system table fields. From this database, detailed inquiries about general information on clinical trials, acupuncturists' experience, ancient medical works, comprehensive literature, etc., can be made. The present databank lays a foundation for subsequent evaluation of literature quality about Deqi and for data mining of undetected Deqi knowledge.
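    For readers unfamiliar with such literature databases, a minimal sketch of what an analogous schema could look like is shown below, using SQLite in place of SQL Server 2005 Express; all table and field names are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch of a literature-database schema of this kind,
# using SQLite as a stand-in for SQL Server 2005 Express.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE article (
    article_id INTEGER PRIMARY KEY,
    title      TEXT NOT NULL,
    source_db  TEXT CHECK (source_db IN
               ('CNKI', 'WanFang', 'VIP', 'CBM', 'PubMed')),
    pub_year   INTEGER,
    category   TEXT      -- e.g. clinical trial, expert experience
);
CREATE TABLE keyword (
    article_id INTEGER REFERENCES article(article_id),
    term       TEXT      -- e.g. 'Deqi', 'needle sensation'
);
""")
conn.execute(
    "INSERT INTO article (title, source_db, pub_year, category) "
    "VALUES (?, ?, ?, ?)",
    ("Deqi and clinical efficacy", "CNKI", 2012, "clinical trial"))
print(conn.execute("SELECT COUNT(*) FROM article").fetchone())
```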

  13. Design and implementation of typical target image database system

    International Nuclear Information System (INIS)

    Qin Kai; Zhao Yingjun

    2010-01-01

    It is necessary to provide essential background data and thematic data in a timely manner in image processing and its applications. In fact, an application is a procedure that integrates and analyzes different kinds of data. In this paper, the authors describe the design and structure of an image database system that classifies, stores, manages and analyzes databases of different types, such as image databases, vector databases, spatial databases and spatial target characteristics databases. (authors)

  14. Design of an Electronic Healthcare Record Server Based on Part 1 of ISO EN 13606

    Directory of Open Access Journals (Sweden)

    Tony Austin

    2011-01-01

    Full Text Available ISO EN 13606 is a newly approved standard at European and ISO levels for the meaningful exchange of clinical information between systems. Although conceived as an interoperability standard to which existing electronic health record (EHR) systems will transform legacy data, the requirements met and the architectural approach reflected in this standard also make it a good candidate for the internal architecture of an EHR server. The authors have built such a server for the storage of healthcare records and demonstrated that it is possible to use ISO EN 13606 part 1 as the basis of an internal system architecture. The development of the system and some of the applications of the server are described in this paper. It is the first known operational implementation of the standard as an EHR system.

  15. Toward designing for trust in database automation

    Energy Technology Data Exchange (ETDEWEB)

    Duez, P. P.; Jamieson, G. A. [Cognitive Engineering Laboratory, Univ. of Toronto, 5 King' s College Rd., Toronto, Ont. M5S 3G8 (Canada)

    2006-07-01

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process- and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process.

  16. Toward designing for trust in database automation

    International Nuclear Information System (INIS)

    Duez, P. P.; Jamieson, G. A.

    2006-01-01

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process- and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process.

  17. Design of multi-tiered database application based on CORBA component

    International Nuclear Information System (INIS)

    Sun Xiaoying; Dai Zhimin

    2003-01-01

    With computer technology developing quickly, middleware technology has changed the traditional two-tier database system. The multi-tiered database system, consisting of client application programs, application servers and database servers, is now the main approach, and building multi-tiered database systems using CORBA components has become the mainstream technique. In this paper, the DUV-FEL database system is presented as an example, and the realization of a multi-tiered database based on CORBA components is discussed. (authors)

  18. Colorado Late Cenozoic Fault and Fold Database and Internet Map Server: User-friendly technology for complex information

    Science.gov (United States)

    Morgan, K.S.; Pattyn, G.J.; Morgan, M.L.

    2005-01-01

    Internet mapping applications for geologic data allow simultaneous data delivery and collection, enabling quick data modification while efficiently supplying the end user with information. Utilizing Web-based technologies, the Colorado Geological Survey's Colorado Late Cenozoic Fault and Fold Database was transformed from a monothematic, nonspatial Microsoft Access database into a complex information set incorporating multiple data sources. The resulting user-friendly format supports easy analysis and browsing. The core of the application is the Microsoft Access database, which contains information compiled from available literature about faults and folds that are known or suspected to have moved during the late Cenozoic. The database contains nonspatial fields such as structure type, age, and rate of movement. Geographic locations of the fault and fold traces were compiled from previous studies at 1:250,000 scale to form a spatial database containing information such as length and strike. Integration of the two databases allowed both spatial and nonspatial information to be presented on the Internet as a single dataset (http://geosurvey.state.co.us/pubs/ceno/). The user-friendly interface enables users to view and query the data in an integrated manner, thus providing multiple ways to locate desired information. Retaining the digital data format also allows continuous data updating and quick delivery of newly acquired information. This dataset is a valuable resource to anyone interested in earthquake hazards and the activity of faults and folds in Colorado. Additional geologic hazard layers and imagery may aid in decision support and hazard evaluation. The up-to-date and customizable maps are invaluable tools for researchers or the public.

  19. Design of Qualitative HRA Database Structure

    International Nuclear Information System (INIS)

    Kim, Seunghwan; Kim, Yochan; Choi, Sun Yeong; Park, Jinkyun; Jung, Wondea

    2015-01-01

    An HRA DB collects and stores data in database form so that they can be managed and maintained from the perspective of human reliability analysis. All information on the human errors made by operators in the power plant should be systematically collected and documented for its management. KAERI is developing a simulator-based HRA data handbook. In this study, the information required to store and manage the data necessary to perform an HRA, i.e. the HRA data to be stored in the handbook, is identified and summarized. In particular, this study summarizes the collection and classification of qualitative data as the raw data needed to derive the HEP, and its DB process. The qualitative HRA DB is a storehouse of all the sub-information needed to obtain the human error probability for a PSA. In this study, the requirements for the structural design and implementation of the qualitative HRA DB were summarized. A follow-up study on the implementation of a quantitative HRA DB should follow, to derive the substantial HEP.

  20. An Array Library for Microsoft SQL Server with Astrophysical Applications

    Science.gov (United States)

    Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.

    2012-09-01

    Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques that are beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management, but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. Also, the library is designed to integrate seamlessly with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on the fly, from SQL code, inside the database server process. We are currently testing the prototype with two different scientific data sets: the Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory…
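    The Array Library itself is custom CLR code compiled into SQL Server, which cannot be reproduced here. The sketch below only illustrates the underlying idea, serializing fixed-size numeric arrays into binary columns and evaluating math inside the engine, using Python and SQLite as stand-ins; all names are hypothetical.

```python
# Illustrative sketch only: storing a fixed-size numeric array as a
# binary value in a relational table and operating on it close to the
# data, so subsetting happens before anything crosses to the client.
import sqlite3
import struct

def pack(values):
    """Serialize a list of floats into a compact binary blob."""
    return struct.pack(f"<{len(values)}d", *values)

def unpack(blob):
    """Deserialize a blob back into a tuple of floats."""
    return struct.unpack(f"<{len(blob) // 8}d", blob)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE particles (id INTEGER PRIMARY KEY, pos BLOB)")
conn.execute("INSERT INTO particles (pos) VALUES (?)",
             (pack([1.0, 2.0, 3.0]),))

# A scalar function registered in the engine acts "inside the server".
conn.create_function("vec_norm", 1,
                     lambda b: sum(x * x for x in unpack(b)) ** 0.5)
print(conn.execute("SELECT vec_norm(pos) FROM particles").fetchone())
```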

  1. Development of Information Technology of Object-relational Databases Design

    Directory of Open Access Journals (Sweden)

    Valentyn A. Filatov

    2012-12-01

    Full Text Available The article is concerned with the development of an information technology for object-relational database design, and with the study of the object features of infological and logical database schemas, their entities and connections.

  2. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  3. Design of Database System of HIRFL-CSR Beam Line

    International Nuclear Information System (INIS)

    Li Peng; Li Ke; Yin Dayu; Yuan Youjin; Gou Shizhe

    2009-01-01

    This paper introduces the database design and optimization for the power supply system of the Lanzhou Heavy Ion Accelerator CSR (HIRFL-CSR) beam line. Based on the HIRFL-CSR main Oracle database system, an interface was designed to read the parameters of the power supplies while achieving real-time monitoring. A new database system to store the history data of the power supplies was established at the same time, and it realized data exchange between the Oracle database system and an Access database system. Meanwhile, the interface was designed for convenient printing and querying of parameters. (authors)

  4. Database/Operating System Co-Design

    OpenAIRE

    Giceva, Jana

    2016-01-01

    We want to investigate how to improve the information flow between a database and an operating system, aiming for better scheduling and smarter resource management. We are interested in identifying the potential optimizations that can be achieved with a better interaction between a database engine and the underlying operating system, especially by allowing the application to get more control over scheduling and memory management decisions. Therefore, we explored some of the issues that arise ...

  5. Designing a database for performance assessment: Lessons learned from WIPP

    International Nuclear Information System (INIS)

    Martell, M.A.; Schenker, A.

    1997-01-01

    The Waste Isolation Pilot Plant (WIPP) Compliance Certification Application (CCA) Performance Assessment (PA) used a relational database that was originally designed only to supply the input parameters required for implementation of the PA codes. Reviewers used the database as a point of entry to audit quality assurance measures for control, traceability, and retrievability of input information used for analysis, and output/work products. During these audits it became apparent that modifications to the architecture and scope of the database would benefit the EPA regulator and other stakeholders when reviewing the recertification application. This paper contains a discussion of the WIPP PA CCA database and lessons learned for designing a database.

  6. Method for a dummy CD mirror server based on NAS

    Science.gov (United States)

    Tang, Muna; Pei, Jing

    2002-09-01

    With the development of computer networks, information sharing is becoming a necessity in daily life. The rapid development of CD-ROM and CD-ROM drive techniques makes it possible to publish large databases online. After comparing many designs for dummy CD mirror databases, which embody the main products in CD-ROM databases now and in the near future, we proposed and realized a new PC-based scheme. Our system has the following merits: it supports all kinds of CD formats and many network protocols, the mirror network server is independent of the main server, and it offers low price and very large capacity without the need for any special hardware. Preliminary experiments have verified the validity of the proposed scheme. Encouraged by its promising application prospects, we are now preparing to put it on the market. This paper discusses the design and implementation of the CD-ROM server in detail.

  7. Designation of organism group - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)


  8. The RNAsnp web server

    DEFF Research Database (Denmark)

    Radhakrishnan, Sabarinathan; Tafer, Hakim; Seemann, Ernst Stefan

    2013-01-01

    … are derived from extensive pre-computed tables of distributions of substitution effects as a function of gene length and GC content. Here, we present a web service that not only provides an interface for RNAsnp but also features a graphical output representation. In addition, the web server is connected … to a local mirror of the UCSC genome browser database that enables the users to select the genomic sequences for analysis and visualize the results directly in the UCSC genome browser. The RNAsnp web server is freely available at: http://rth.dk/resources/rnasnp/.

  9. Conceptual design of nuclear power plants database system

    International Nuclear Information System (INIS)

    Ishikawa, Masaaki; Izumi, Fumio; Sudoh, Takashi.

    1984-03-01

    This report is the result of a joint study on the development of a nuclear power plant database system. The present conceptual design of the database system, which includes Japanese character processing and image processing, has been based on the data of safety design parameters mainly found in the application documents for reactor construction permits made available to the public. (author)

  10. A Database Design and Development Case: NanoTEK Networks

    Science.gov (United States)

    Ballenger, Robert M.

    2010-01-01

    This case provides a real-world project-oriented case study for students enrolled in a management information systems, database management, or systems analysis and design course in which database design and development are taught. The case consists of a business scenario to provide background information and details of the unique operating…

  11. Database structure and file layout of Nuclear Power Plant Database. Database for design information on Light Water Reactors in Japan

    International Nuclear Information System (INIS)

    Yamamoto, Nobuo; Izumi, Fumio.

    1995-12-01

    The Nuclear Power Plant Database (PPD) has been developed at the Japan Atomic Energy Research Institute (JAERI) to provide plant design information on domestic Light Water Reactors (LWRs) to be used for nuclear safety research and so forth. This database runs on the mainframe computer at the JAERI Tokai Establishment. The PPD contains information on the plant design concepts and the numbers, capacities, materials, structures and types of equipment and components, etc., based on the safety analysis reports of the domestic LWRs. This report describes the details of the PPD, focusing on the database structure and the layout of data files so that users can utilize it efficiently. (author)

  12. Handbook of video databases design and applications

    CERN Document Server

    Furht, Borko

    2003-01-01

    INTRODUCTION: Introduction to Video Databases (Oge Marques and Borko Furht). VIDEO MODELING AND REPRESENTATION: Modeling Video Using Input/Output Markov Models with Application to Multi-Modal Event Detection (Ashutosh Garg, Milind R. Naphade, and Thomas S. Huang); Statistical Models of Video Structure and Semantics (Nuno Vasconcelos); Flavor: A Language for Media Representation (Alexandros Eleftheriadis and Danny Hong); Integrating Domain Knowledge and Visual Evidence to Support Highlight Detection in Sports Videos (Juergen Assfalg, Marco Bertini, Carlo Colombo, and Alberto Del Bimbo); A Generic Event Model and Sports Vid…

  13. Implementing a Dynamic Database-Driven Course Using LAMP

    Science.gov (United States)

    Laverty, Joseph Packy; Wood, David; Turchek, John

    2011-01-01

    This paper documents the formulation of a database-driven, open source architecture web development course. The design of a web-based curriculum faces many challenges: a) the relative emphasis of client- and server-side technologies, b) the choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…

  14. Centralized database for interconnection system design. [for spacecraft

    Science.gov (United States)

    Billitti, Joseph W.

    1989-01-01

    A database application called DFACS (Database, Forms and Applications for Cabling and Systems) is described. The objective of DFACS is to improve the speed and accuracy of interconnection system information flow during the design and fabrication stages of a project, while simultaneously supporting both the horizontal (end-to-end wiring) and the vertical (wiring by connector) design stratagems used by the Jet Propulsion Laboratory (JPL) project engineering community. The DFACS architecture is centered around a centralized database and program methodology which emulates the manual design process hitherto used at JPL. DFACS has been tested and successfully applied to existing JPL hardware tasks with a resulting reduction in schedule time and costs.

  15. PathwayAccess: CellDesigner plugins for pathway databases.

    Science.gov (United States)

    Van Hemert, John L; Dickerson, Julie A

    2010-09-15

    CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins, which connect CellDesigner directly to pathway databases using respective Java application programming interfaces. The process is streamlined for creating new PathwayAccess plugins for specific pathway databases. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and MacOSX. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.

  16. INNOVATIVE DATABASE MANAGEMENT SYSTEM OF DESIGN WORKS

    Directory of Open Access Journals (Sweden)

    Albina Ahmetova

    2012-01-01

    Full Text Available The article discusses the definition of professional thinking in design, on which the formation and embodiment of a project and the implementation of the customer's ideas are based. The article proposes teaching students the proper approach to the appearance and presentation of professional and educational works.

  17. Concepts and tools for the design of semantical databases

    CERN Document Server

    Meersman, Robert A

    1991-01-01

    The design and implementation of modern, more "semantical" databases involves the use of high-level conceptual abstraction mechanisms and methodologies. An illustration of this process is given using the NIAM method and notation (lecture 1), its transformation into a relational database with triggers (e.g. using SYBASE) (lecture 2), and a study of the requirements for suitable tools (RIDL*) and their extension and applicability to e.g. object-oriented databases. A case study defined by a complex database for document handling will be used as an example (lecture 3).

  18. MPD3: a useful medicinal plants database for drug designing.

    Science.gov (United States)

    Mumtaz, Arooj; Ashfaq, Usman Ali; Ul Qamar, Muhammad Tahir; Anwar, Farooq; Gulzar, Faisal; Ali, Muhammad Amjad; Saari, Nazamid; Pervez, Muhammad Tariq

    2017-06-01

    Medicinal plants are the main natural pools for the discovery and development of new drugs. In the modern era of computer-aided drug designing (CADD), there is a need for prompt efforts to design and construct a useful database management system that allows proper data storage, retrieval and management with a user-friendly interface. An inclusive database having information about the classification, activity and ready-to-dock library of medicinal plants' phytochemicals is therefore required to assist researchers in the field of CADD. The present work was designed to merge the activities of phytochemicals from medicinal plants, their targets and literature references into a single comprehensive database named the Medicinal Plants Database for Drug Designing (MPD3). The newly designed online and downloadable MPD3 contains information about more than 5000 phytochemicals from around 1000 medicinal plants with 80 different activities, more than 900 literature references and 200-plus targets. The designed database is deemed to be very useful for researchers who are engaged in medicinal plants research, CADD and drug discovery/development, with ease of operation and increased efficiency. The designed MPD3 is a comprehensive database which provides most of the information related to medicinal plants on a single platform. MPD3 is freely available at: http://bioinform.info.

  19. CPU Server

    CERN Multimedia

    The CERN computer centre has hundreds of racks like these. They are over a million times more powerful than our first computer in the 1960s. This tray is a 'dual-core' server. This means it effectively has two CPUs in it (e.g. two of your home computers minimised to fit into a single box). Also note the copper cooling fins, to help dissipate the heat.

  20. Design and implementation of the ITPA confinement profile database

    Energy Technology Data Exchange (ETDEWEB)

    Walters, Malcolm E-mail: malcolm.walters@ukaea.org.uk; Roach, Colin

    2004-06-01

    One key goal of the fusion program is to improve the accuracy of physics models in describing existing experiments, so as to make better predictions of the performance of future fusion devices. To support this goal, databases of experimental results from multiple machines have been assembled to facilitate the testing of physics models over a wide range of operating conditions and plasma parameters. One such database was the International Multi-Tokamak Profile Database. This database has more recently been substantially revamped to exploit newer technologies, and is now known as the ITPA confinement profile database (http://www.tokamak-profiledb.ukaea.org.uk). The overall design of the updated system will be outlined and the implementation of the relational database part will be described in detail.

  1. Design of SMART alarm system using main memory database

    International Nuclear Information System (INIS)

    Jang, Kue Sook; Seo, Yong Seok; Park, Keun Oak; Lee, Jong Bok; Kim, Dong Hoon

    2001-01-01

    To achieve the design goals of the SMART alarm system, we first have to decide how to handle and manage alarm information and how to use the database. This paper therefore analyses the concepts and deficiencies of main memory databases applied in real-time systems, sets up the structure and processing principles of a main memory database using nonvolatile memory such as flash memory, and develops a recovery strategy and processor board structures based on these. The paper shows that the resulting design of the SMART alarm system suits its functions and requirements.
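    A minimal sketch of the recovery idea described above is given below, assuming a main-memory table whose updates are written through to a nonvolatile append-only log; an ordinary file stands in for flash memory, and all names are hypothetical rather than taken from the SMART design.

```python
# Minimal sketch (not the SMART design itself): a main memory "table"
# whose updates are appended to a nonvolatile log, so the in-memory
# state can be rebuilt after a restart.
import json
import os

LOG = "alarm_log.jsonl"  # hypothetical log file name

class MainMemoryAlarmDB:
    def __init__(self, log_path=LOG):
        self.log_path = log_path
        self.alarms = {}          # all data lives in RAM
        self._recover()

    def _recover(self):
        """Replay the nonvolatile log to rebuild in-memory state."""
        if not os.path.exists(self.log_path):
            return
        with open(self.log_path) as f:
            for line in f:
                rec = json.loads(line)
                self.alarms[rec["id"]] = rec["state"]

    def set_alarm(self, alarm_id, state):
        # Write-through: persist the change before applying it in RAM.
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"id": alarm_id, "state": state}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.alarms[alarm_id] = state

db = MainMemoryAlarmDB()
db.set_alarm("RCS_PRESSURE_HI", "active")
print(db.alarms)
```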

  2. Analysis/design of tensile property database system

    International Nuclear Information System (INIS)

    Park, S. J.; Kim, D. H.; Jeon, I.; Lyu, W. S.

    2001-01-01

    Constructing a database from the data produced in tensile experiments can increase the usefulness of test results. We can easily retrieve basic data from the database when preparing a new experiment, and can produce high-quality results by comparing with previous data. To construct the database, the analysis and design must be made more specific, after which we can offer the best quality for customers' various requirements. In this thesis, the analysis and design were performed to develop a database for tensile extension properties.

  3. DESIGN AND CONSTRUCTION OF A FOREST SPATIAL DATABASE: AN APPLICATION

    Directory of Open Access Journals (Sweden)

    Turan Sönmez

    2006-11-01

    Full Text Available The General Directorate of Forests (GDF) has not yet created a spatial forest database with which to manage forests and catch up with the developed countries in forestry. The lack of a spatial forest database results in redundant collection of spatial data and communication problems among the forestry organizations; it also leaves Turkish forestry behind in the informatics era. To solve these problems, the GDF should establish a spatial forest database supported by a Geographic Information System (GIS). Designing a GIS-supported spatial database that provides accurate, timely and current data/information for decision makers and operators in forestry, and developing a sample interface program to apply and monitor classical forest management plans, is paramount in the contemporary forest management planning process. This research is composed of three major stages: (i) a spatial prototype database design considering the requirements of the three hierarchical organizations of the GDF (regional directorate of forests, forest enterprise, and territorial division), (ii) a user interface program developed to apply and monitor classical management plans based on the designed database, and (iii) the implementation of the designed database and its user interface in the Artvin Central Planning Unit.

  4. A Components Database Design and Implementation for Accelerators and Detectors

    International Nuclear Information System (INIS)

    Chan, A.; Meyer, S.

    2011-01-01

    Many accelerator and detector systems being fabricated for the PEP-II Accelerator and BABAR Detector needed configuration control and calibration measurements tracked for their components. Instead of building a database for each distinct system, a Components Database was designed and implemented that can encompass any type of component and any type of measurement. In this paper we describe this database design that is especially suited for the engineering and fabrication processes of the accelerator and detector environments where there are thousands of unique component types. We give examples of information stored in the Components Database, which includes accelerator configuration, calibration measurements, fabrication history, design specifications, inventory, etc. The World Wide Web interface is used to access the data, and templates are available for international collaborations to collect data off-line.

  5. HotSpot Wizard 3.0: web server for automated design of mutations and smart libraries based on sequence input information.

    Science.gov (United States)

    Sumbalova, Lenka; Stourac, Jan; Martinek, Tomas; Bednar, David; Damborsky, Jiri

    2018-05-23

    HotSpot Wizard is a web server used for the automated identification of hotspots in semi-rational protein design to give improved protein stability, catalytic activity, substrate specificity and enantioselectivity. Since there are three orders of magnitude fewer protein structures than sequences in bioinformatic databases, the major limitation to the usability of previous versions was the requirement for the protein structure to be a compulsory input for the calculation. HotSpot Wizard 3.0 now accepts the protein sequence as input data. The protein structure for the query sequence is obtained either from eight repositories of homology models or is modeled using Modeller and I-Tasser. The quality of the models is then evaluated using three quality assessment tools: WHAT_CHECK, PROCHECK and MolProbity. During follow-up analyses, the system automatically warns the users whenever they attempt to redesign poorly predicted parts of their homology models. The second main limitation of HotSpot Wizard's predictions is that it identifies suitable positions for mutagenesis, but does not provide any reliable advice on particular substitutions. A new module for the estimation of thermodynamic stabilities using the Rosetta and FoldX suites has been introduced, which prevents destabilizing mutations among pre-selected variants entering experimental testing. HotSpot Wizard is freely available at http://loschmidt.chemi.muni.cz/hotspotwizard.

  6. Professional Microsoft SQL Server 2012 Administration

    CERN Document Server

    Jorgensen, Adam; LoForte, Ross; Knight, Brian

    2012-01-01

    An essential how-to guide for experienced DBAs on the most significant product release since 2005! Microsoft SQL Server 2012 will have major changes throughout the SQL Server and will impact how DBAs administer the database. With this book, a team of well-known SQL Server experts introduces the many new features of the most recent version of SQL Server and deciphers how these changes will affect the methods that administrators have been using for years. Loaded with unique tips, tricks, and workarounds for handling the most difficult SQL Server admin issues, this how-to guide deciphers topics…

  7. RNAiFold 2.0: a web server and software to design custom and Rfam-based RNA molecules.

    Science.gov (United States)

    Garcia-Martin, Juan Antonio; Dotu, Ivan; Clote, Peter

    2015-07-01

    Several algorithms for RNA inverse folding have been used to design synthetic riboswitches, ribozymes and thermoswitches, whose activity has been experimentally validated. The RNAiFold software is unique among approaches for inverse folding in that (exhaustive) constraint programming is used instead of heuristic methods. For that reason, RNAiFold can generate all sequences that fold into the target structure or determine that there is no solution. RNAiFold 2.0 is a complete overhaul of RNAiFold 1.0, rewritten from the now defunct COMET language to C++. The new code properly extends the capabilities of its predecessor by providing a user-friendly pipeline to design synthetic constructs having the functionality of given Rfam families. In addition, the new software supports amino acid constraints, even for proteins translated in different reading frames from overlapping coding sequences; moreover, structure compatibility/incompatibility constraints have been expanded. With these features, RNAiFold 2.0 allows the user to design single RNA molecules as well as hybridization complexes of two RNA molecules. The web server, source code and Linux binaries are publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold2.0. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. An Expert System Helps Students Learn Database Design

    Science.gov (United States)

    Post, Gerald V.; Whisenand, Thomas G.

    2005-01-01

    Teaching and learning database design is difficult for both instructors and students. Students need to solve many problems with feedback and corrections. A Web-based specialized expert system was created to enable students to create designs online and receive immediate feedback. An experiment testing the system shows that it significantly enhances…

  9. Teaching Database Design with Constraint-Based Tutors

    Science.gov (United States)

    Mitrovic, Antonija; Suraweera, Pramuditha

    2016-01-01

    Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…

  10. Property Modelling and Databases in Product-Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Sansonetti, Sascha

    … of the PC-SAFT is used. The developed database and property prediction models have been combined into a properties software package that allows different product-process design related applications. The presentation will also briefly highlight applications of the software for virtual product-process design…

  11. One approach to design of speech emotion database

    Science.gov (United States)

    Uhrin, Dominik; Chmelikova, Zdenka; Tovarek, Jaromir; Partila, Pavol; Voznak, Miroslav

    2016-05-01

    This article describes a system for evaluating the credibility of recordings with an emotional character. The sound recordings form a Czech-language database for training and testing speech emotion recognition systems. These systems are designed to detect human emotions in the voice. Information about a person's emotional state is useful to the security forces and the emergency call service. People in action (soldiers, police officers and firefighters) are often exposed to stress, and information about their emotional state, carried in the voice, helps the dispatcher adapt control commands for the intervention procedure. Call agents of the emergency call service must recognize the mental state of the caller to adjust the mood of the conversation; in this case, evaluation of the psychological state is the key factor for a successful intervention. A quality database of sound recordings is essential for creating such systems. Quality databases exist, such as the Berlin Database of Emotional Speech or Humaine, but actors created these databases in an audio studio, which means that the recordings contain simulated emotions, not real ones. Our research aims at creating a database of Czech emotional recordings of real human speech. Collecting sound samples for the database is only one of the tasks; another, no less important, is to evaluate the significance of the recordings from the perspective of emotional states. The design of a methodology for evaluating the credibility of emotional recordings is described in this article. The results describe the advantages and applicability of the developed method.

  12. Design and realization of reports of database in VC++. net

    International Nuclear Information System (INIS)

    Zhu Haijun; Shen Liren; Liu Dekang

    2006-01-01

    The design and realization of database reports based on VC++.NET is presented. First, report templates using Word-format files are introduced, and the method of filling in the data table from the database is explained. The preview and printing functions, which call the Word software, are analyzed. The key code for generating reports automatically with Visual C++.NET is given. (authors)

  13. ProSeeK: a web server for MLPA probe design.

    Science.gov (United States)

    Pantano, Lorena; Armengol, Lluís; Villatoro, Sergi; Estivill, Xavier

    2008-11-28

    The technological evolution of platforms for detecting genome-wide copy number imbalances has allowed the discovery of an unexpected amount of human sequence that is variable in copy number among individuals. This type of human variation can make an important contribution to human diversity and disease susceptibility. Multiplex Ligation-dependent Probe Amplification (MLPA) is a targeted method to assess copy number differences for up to 40 genomic loci in one single experiment. Although specific MLPA assays can be ordered from MRC-Holland (the proprietary company of the MLPA technology), custom designs are also developed in many laboratories worldwide. From our own experience, an important drawback of custom MLPA assays is the time spent on the design of the specific oligonucleotides that are used as probes. Due to the large number of probes included in a single assay, a number of restrictions need to be met in order to maximize specificity and to increase the likelihood of success. We have developed a web tool for facilitating and optimising custom probe design for MLPA experiments. The algorithm only requires the target sequence in FASTA format and a set of parameters, which are provided by the user according to each specific MLPA assay, to identify the best probes inside the given region. To our knowledge, this is the first available tool for optimizing custom probe design for MLPA assays. The ease of use and speed of the algorithm dramatically reduce the turnaround time of probe design. ProSeeK will become a useful tool for all laboratories that are currently using MLPA in their research projects for CNV studies.
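    As a rough illustration of this kind of constraint-driven probe search (not ProSeeK's actual algorithm), the sketch below scans a target sequence and keeps candidate sites that satisfy simple user-supplied constraints such as probe length and GC-content range; the constraint values are invented for the example.

```python
# Illustrative probe-candidate filter; the real tool applies many more
# restrictions (melting temperature, specificity, ligation site, etc.).

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def candidate_probes(target, length=25, gc_min=0.40, gc_max=0.60):
    """Yield (position, subsequence) pairs that satisfy the constraints."""
    target = target.upper()
    for i in range(len(target) - length + 1):
        window = target[i:i + length]
        if "N" in window:          # skip ambiguous bases
            continue
        if gc_min <= gc_content(window) <= gc_max:
            yield i, window

if __name__ == "__main__":
    fasta_seq = "ATGCGCGTATACCGGATCGATCGGCTAGCTAGGCTANCGATCGTAGCTAGG"
    for pos, probe in candidate_probes(fasta_seq, length=20):
        print(pos, probe, round(gc_content(probe), 2))
```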

  14. ProSeeK: A web server for MLPA probe design

    Directory of Open Access Journals (Sweden)

    Villatoro Sergi

    2008-11-01

    Full Text Available Abstract Background The technological evolution of platforms for detecting genome-wide copy number imbalances has allowed the discovery of an unexpected amount of human sequence that is variable in copy number among individuals. This type of human variation can make an important contribution to human diversity and disease susceptibility. Multiplex Ligation-dependent Probe Amplification (MLPA) is a targeted method to assess copy number differences for up to 40 genomic loci in one single experiment. Although specific MLPA assays can be ordered from MRC-Holland (the proprietary company of the MLPA technology), custom designs are also developed in many laboratories worldwide. From our own experience, an important drawback of custom MLPA assays is the time spent on the design of the specific oligonucleotides that are used as probes. Due to the large number of probes included in a single assay, a number of restrictions need to be met in order to maximize specificity and to increase the likelihood of success. Results We have developed a web tool for facilitating and optimising custom probe design for MLPA experiments. The algorithm only requires the target sequence in FASTA format and a set of parameters, which are provided by the user according to each specific MLPA assay, to identify the best probes inside the given region. Conclusion To our knowledge, this is the first available tool for optimizing custom probe design of MLPA assays. The ease of use and speed of the algorithm dramatically reduce the turnaround time of probe design. ProSeeK will become a useful tool for all laboratories that are currently using MLPA in their research projects for CNV studies.

  15. Virtual materials design using databases of calculated materials properties

    International Nuclear Information System (INIS)

    Munter, T R; Landis, D D; Abild-Pedersen, F; Jones, G; Wang, S; Bligaard, T

    2009-01-01

    Materials design is most commonly carried out by experimental trial and error techniques. Current trends indicate that the increased complexity of newly developed materials, the exponential growth of the available computational power, and the constantly improving algorithms for solving the electronic structure problem, will continue to increase the relative importance of computational methods in the design of new materials. One possibility for utilizing electronic structure theory in the design of new materials is to create large databases of materials properties, and subsequently screen these for new potential candidates satisfying given design criteria. We utilize a database of more than 81 000 electronic structure calculations. This alloy database is combined with other published materials properties to form the foundation of a virtual materials design framework (VMDF). The VMDF offers a flexible collection of materials databases, filters, analysis tools and visualization methods, which are particularly useful in the design of new functional materials and surface structures. The applicability of the VMDF is illustrated by two examples. One is the determination of the Pareto-optimal set of binary alloy methanation catalysts with respect to catalytic activity and alloy stability; the other is the search for new alloy mercury absorbers.

  16. A phenome database (NEAUHLFPD) designed and constructed for broiler lines divergently selected for abdominal fat content.

    Science.gov (United States)

    Li, Min; Dong, Xiang-yu; Liang, Hao; Leng, Li; Zhang, Hui; Wang, Shou-zhi; Li, Hui; Du, Zhi-Qiang

    2017-05-20

    Effective management and analysis of precisely recorded phenotypic traits are important components of the selection and breeding of superior livestock. Over two decades, we divergently selected chicken lines for abdominal fat content at Northeast Agricultural University (Northeast Agricultural University High and Low Fat, NEAUHLF), and collected a large volume of phenotypic data during the investigation of the molecular genetic basis of adipose tissue deposition in broilers. To effectively and systematically store, manage and analyze these phenotypic data, we built the NEAUHLF Phenome Database (NEAUHLFPD). NEAUHLFPD includes the following phenotypic records: pedigree (generations 1-19) and 29 phenotypes, such as body sizes and weights, carcass traits and their corresponding rates. The design and construction strategy of NEAUHLFPD was as follows: (1) Framework design. We used Apache as our web server, MySQL and Navicat as database management tools, and PHP as the HTML-embedded language to create a dynamic interactive website. (2) Structural components. On the main interface, a detailed introduction to the composition, function, and index buttons of the basic structure of the database can be found. The functional modules of NEAUHLFPD have two main components: the first module is the physical storage space for phenotypic data, in which functional manipulations of the data, such as indexing, filtering, range-setting and searching, can be performed; the second module relates to the calculation of basic descriptive statistics, where data filtered from the database can be used for the computation of basic statistical parameters and simultaneous conditional sorting. NEAUHLFPD can be used to effectively store and manage not only phenotypic, but also genotypic and genomics data, which can facilitate further investigation of the molecular genetic basis of chicken adipose tissue growth and development, and expedite the selection and breeding of broilers.

  17. Design and implementation of the NPOI database and website

    Science.gov (United States)

    Newman, K.; Jorgensen, A. M.; Landavazo, M.; Sun, B.; Hutter, D. J.; Armstrong, J. T.; Mozurkewich, David; Elias, N.; van Belle, G. T.; Schmitt, H. R.; Baines, E. K.

    2014-07-01

    The Navy Precision Optical Interferometer (NPOI) has been recording astronomical observations for nearly two decades, at this point with hundreds of thousands of individual observations recorded to date for a total data volume of many terabytes. To make maximum use of the NPOI data it is necessary to organize them in an easily searchable manner and be able to extract essential diagnostic information from the data to allow users to quickly gauge data quality and suitability for a specific science investigation. This sets the motivation for creating a comprehensive database of observation metadata as well as, at least, reduced data products. The NPOI database is implemented in MySQL using standard database tools and interfaces. The use of standard database tools allows us to focus on top-level database and interface implementation and take advantage of standard features such as backup, remote access, mirroring, and complex queries which would otherwise be time-consuming to implement. A website was created in order to give scientists a user friendly interface for searching the database. It allows the user to select various metadata to search for and also allows them to decide how and what results are displayed. This streamlines the searches, making it easier and quicker for scientists to find the information they are looking for. The website has multiple browser and device support. In this paper we present the design of the NPOI database and website, and give examples of its use.
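    To illustrate the kind of user-driven metadata search such a website typically issues against the database, here is a small sketch using SQLite; the schema and field names are invented for illustration, not the actual NPOI design.

```python
# Hypothetical observation-metadata table and a parameterized search of
# the kind a search page might build from only the filters the user set.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE observations (
    id INTEGER PRIMARY KEY, target TEXT, obs_date TEXT, quality REAL)""")
conn.executemany(
    "INSERT INTO observations (target, obs_date, quality) VALUES (?, ?, ?)",
    [("Vega", "2013-06-01", 0.92), ("Altair", "2013-06-02", 0.55)])

def search(target=None, min_quality=None):
    clauses, params = [], []
    if target is not None:
        clauses.append("target = ?")
        params.append(target)
    if min_quality is not None:
        clauses.append("quality >= ?")
        params.append(min_quality)
    sql = "SELECT * FROM observations"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

print(search(target="Vega", min_quality=0.9))
```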

  18. The HydroServer Platform for Sharing Hydrologic Data

    Science.gov (United States)

    Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.

    2010-12-01

    The CUAHSI Hydrologic Information System (HIS) is an internet based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture comprises servers for publishing and sharing data, a centralized catalog to support cross-server data discovery and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open source code repository and development system. There is some reliance on widely used commercial software for general-purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its…

  19. Usability as the Key Factor to the Design of a Web Server for the CReF Protein Structure Predictor: The wCReF

    Directory of Open Access Journals (Sweden)

    Vanessa Stangherlin Machado Paixão-Cortes

    2018-01-01

    Full Text Available Protein structure prediction servers use various computational methods to predict the three-dimensional structure of proteins from their amino acid sequence. Predicted models are used to infer protein function and guide experimental efforts. This can contribute to solving the problem of predicting tertiary protein structures, one of the main unsolved problems in bioinformatics. The challenge is to understand the relationship between the amino acid sequence of a protein and its three-dimensional structure, which is related to the function of these macromolecules. This article is an extended version of the article wCReF: The Web Server for the Central Residue Fragment-based Method (CReF) Protein Structure Predictor, published in the 14th International Conference on Information Technology: New Generations. In the first version, we presented wCReF, a protein structure prediction server for the central residue fragment-based method. The wCReF interface was developed with a focus on usability and user interaction. With this tool, users can enter the amino acid sequence of their target protein and obtain its approximate 3D structure without the need to install the multitude of necessary tools. In this extended version, we present the design process of the prediction server in detail, which includes: (A) identification of user needs, aiming at understanding the features of a protein structure prediction server, the end user profiles and the commonly performed tasks; (B) server usability inspection, in which, in order to define wCReF's requirements and features, we used heuristic evaluation guided by experts in both the human-computer interaction and bioinformatics domains, applied to the protein structure prediction servers I-TASSER, QUARK and Robetta; as a result, problems were found against all heuristics, for a total of 89 usability problems; (C) software requirements document and prototype, with the assessment results guiding the key features that wCReF must…

  20. Use of Software Tools in Teaching Relational Database Design.

    Science.gov (United States)

    McIntyre, D. R.; And Others

    1995-01-01

    Discusses the use of state-of-the-art software tools in teaching a graduate, advanced, relational database design course. Results indicated a positive student response to the prototype of expert systems software and a willingness to utilize this new technology both in their studies and in future work applications. (JKP)

  1. A Database Design and Development Case: Home Theater Video

    Science.gov (United States)

    Ballenger, Robert; Pratt, Renee

    2012-01-01

    This case consists of a business scenario of a small video rental store, Home Theater Video, which provides background information, a description of the functional business requirements, and sample data. The case provides sufficient information to design and develop a moderately complex database to assist Home Theater Video in solving their…

  2. The Design of Lexical Database for Indonesian Language

    Science.gov (United States)

    Gunawan, D.; Amalia, A.

    2017-03-01

    Kamus Besar Bahasa Indonesia (KBBI), the official dictionary of the Indonesian language, provides lists of words with their meanings; an online version can be accessed via the Internet. Another online dictionary is Kateglo. KBBI online and Kateglo only provide an interface for humans: a machine cannot easily retrieve data from these dictionaries without advanced techniques, whereas lexical information about words is required in research and application development related to natural language processing, text mining, information retrieval, and sentiment analysis. To address this requirement, we need to build a lexical database that provides well-defined, structured information about words. A well-known lexical database is WordNet, which provides the relations among words in English. This paper proposes the design of a lexical database for the Indonesian language based on a combination of the KBBI 4th edition, Kateglo, and the WordNet structure. Knowledge representation using semantic networks depicts the relations among words and provides a new structure for an Indonesian lexical database. The result of this design can be used as the foundation for building the lexical database for the Indonesian language.
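    The paper does not reproduce its schema here, but a WordNet-style design typically separates word forms, senses, and typed relations between senses. The sqlite3 sketch below is a minimal illustration of that idea under assumed table and column names; it is not the authors' published design.

```python
# Minimal sketch of a WordNet-style lexical schema (assumed names, not the
# authors' published design): word forms, senses, and typed sense relations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE word    (word_id INTEGER PRIMARY KEY, lemma TEXT UNIQUE);
CREATE TABLE sense   (sense_id INTEGER PRIMARY KEY,
                      word_id  INTEGER REFERENCES word(word_id),
                      pos TEXT, gloss TEXT);      -- part of speech, definition
CREATE TABLE relation(source_id INTEGER REFERENCES sense(sense_id),
                      target_id INTEGER REFERENCES sense(sense_id),
                      rel_type  TEXT);            -- e.g. synonym, hypernym
""")
conn.execute("INSERT INTO word(lemma) VALUES ('rumah')")  # 'house'
conn.execute("INSERT INTO sense(word_id, pos, gloss) "
             "VALUES (1, 'n', 'bangunan untuk tempat tinggal')")
# Query all senses of a lemma:
rows = conn.execute("""
    SELECT w.lemma, s.pos, s.gloss
    FROM word w JOIN sense s ON s.word_id = w.word_id
    WHERE w.lemma = ?""", ("rumah",)).fetchall()
print(rows)
```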

  3. Getting started with SQL Server 2014 administration

    CERN Document Server

    Ellis, Gethyn

    2014-01-01

    This is an easy-to-follow, hands-on tutorial that includes real-world examples of SQL Server 2014's new features. Each chapter is explained in a step-by-step manner that guides you to implement the new technology. If you want to create a highly efficient database server then this book is for you. This book is for database professionals and system administrators who want to use the added features of SQL Server 2014 to create a hybrid environment, which is both highly available and allows you to get the best performance from your databases.

  4. Schema Design and Normalization Algorithm for XML Databases Model

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2009-06-01

    Full Text Available In this paper we study the problem of schema design and normalization in the XML database model. We show that, like relational databases, XML documents may contain redundant information, and this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Based on our research work, in which we presented functional dependencies and normal forms of XML Schema, we present a decomposition algorithm for converting any XML Schema into a normalized one that satisfies X-BCNF.

  5. Database basic design for safe management radioactive waste

    International Nuclear Information System (INIS)

    Son, D. C.; Ahn, K. I.; Jung, D. J.; Cho, Y. B.

    2003-01-01

    As the amount of radioactive waste and related information to be managed increases, some organizations are trying or planning to computerize the management of radioactive waste. Considering that information on the safe management of radioactive waste should be used in association with the national radioactive waste management project, standardization of data formats and protocols is required. The Korea Institute of Nuclear Safety (KINS) will establish and operate a nationwide integrated database in order to effectively manage the large amount of information on national radioactive waste. This database makes it possible not only to trace and manage trends in radioactive waste generation and storage, but also to produce reliable analyses of the accumulated quantities. Consequently, it can provide the information needed for national radioactive waste management policy and related industry planning. This study explains the database design, which is the essential element of such information management

  6. Design and implementation of the CEBAF element database

    International Nuclear Information System (INIS)

    Larrieu, T.; Joyce, M.; Slominski, C.

    2012-01-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from the original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous. (authors)
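    The "introspective schema" described here is in the spirit of an entity-attribute-value (EAV) design, in which new types and properties become rows rather than columns, so no ALTER TABLE is ever needed. The sqlite3 sketch below illustrates that general pattern under assumed names; it is not the actual CED schema.

```python
# Illustrative entity-attribute-value (EAV) schema: new element types and
# properties are added as rows, so table structure never changes.
# Assumed table/column names and element names; not the actual CED schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE elem_type (type_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE property  (prop_id INTEGER PRIMARY KEY,
                        type_id INTEGER REFERENCES elem_type(type_id),
                        name TEXT, units TEXT);
CREATE TABLE element   (elem_id INTEGER PRIMARY KEY,
                        type_id INTEGER REFERENCES elem_type(type_id),
                        name TEXT UNIQUE);
CREATE TABLE elem_value(elem_id INTEGER REFERENCES element(elem_id),
                        prop_id INTEGER REFERENCES property(prop_id),
                        value   TEXT);
""")
# Define a new element type and property entirely with INSERTs:
db.execute("INSERT INTO elem_type(name) VALUES ('Quadrupole')")
db.execute("INSERT INTO property(type_id, name, units) VALUES (1, 'Length', 'm')")
db.execute("INSERT INTO element(type_id, name) VALUES (1, 'MQA1S01')")
db.execute("INSERT INTO elem_value VALUES (1, 1, '0.30')")
for row in db.execute("""
    SELECT e.name, p.name, v.value, p.units
    FROM elem_value v
    JOIN element  e ON e.elem_id = v.elem_id
    JOIN property p ON p.prop_id = v.prop_id"""):
    print(row)  # ('MQA1S01', 'Length', '0.30', 'm')
```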

  7. Mastering Citrix XenServer

    CERN Document Server

    Reed, Martez

    2014-01-01

    If you are an administrator who is looking to gain a greater understanding of how to design and implement a virtualization solution based on Citrix® XenServer®, then this book is for you. The book will serve as an excellent resource for those who are already familiar with other virtualization platforms, such as Microsoft Hyper-V or VMware vSphere.The book assumes that you have a good working knowledge of servers, networking, and storage technologies.

  8. Design and Analysis of an Enhanced Patient-Server Mutual Authentication Protocol for Telecare Medical Information System.

    Science.gov (United States)

    Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Obaidat, Mohammad S

    2015-11-01

    In order to access a remote medical server, patients generally use a smart card to log in to the server. It has been observed that most user (patient) authentication protocols suffer from smart-card-stolen attacks, meaning that an attacker can mount several common attacks after extracting the smart card information. Recently, Lu et al. proposed a session key agreement protocol between the patient and the remote medical server and claimed that the protocol is secure against the relevant security attacks. However, this paper presents several security attacks on Lu et al.'s protocol, namely an identity trace attack, a new-smart-card-issue attack, a patient impersonation attack, and a medical server impersonation attack. In order to fix the mentioned security pitfalls, including the smart-card-stolen attack, this paper proposes an efficient remote mutual authentication protocol using smart cards. We have simulated the proposed protocol using the widely accepted AVISPA simulation tool, whose results confirm that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. Moreover, rigorous security analysis proves that the proposed protocol provides strong protection against the relevant security attacks, including the smart-card-stolen attack. We compare the proposed scheme with several related schemes in terms of computation cost and communication cost as well as security functionality, and observe that the proposed scheme compares favorably with existing schemes.
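    The paper's protocol itself is not reproduced here. As a generic illustration of the mutual-authentication idea it builds on, the sketch below shows a textbook HMAC challenge-response exchange over a pre-shared key; all names are hypothetical, and the smart-card extraction defenses and session-key derivation of the actual scheme are omitted. Binding each response to a role string is a standard guard against reflection attacks.

```python
# Generic HMAC challenge-response mutual authentication sketch (textbook
# pattern, NOT the protocol proposed in this paper). Pre-shared key assumed.
import hmac, hashlib, os

KEY = os.urandom(32)  # pre-shared secret between patient card and server

def prove(key: bytes, nonce: bytes, role: bytes) -> bytes:
    """Response proving knowledge of the key, bound to a nonce and a role."""
    return hmac.new(key, role + nonce, hashlib.sha256).digest()

# Server challenges the patient:
server_nonce = os.urandom(16)
patient_resp = prove(KEY, server_nonce, b"patient")
assert hmac.compare_digest(patient_resp, prove(KEY, server_nonce, b"patient"))

# Patient challenges the server (mutuality):
patient_nonce = os.urandom(16)
server_resp = prove(KEY, patient_nonce, b"server")
assert hmac.compare_digest(server_resp, prove(KEY, patient_nonce, b"server"))
print("mutual authentication succeeded (sketch)")
```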

  9. Exam 70-411 administering Windows Server 2012

    CERN Document Server

    Course, Microsoft Official Academic

    2014-01-01

    Microsoft Windows Server is a multi-purpose server designed to increase reliability and flexibility of  a network infrastructure. Windows Server is the paramount tool used by enterprises in their datacenter and desktop strategy. The most recent versions of Windows Server also provide both server and client virtualization. Its ubiquity in the enterprise results in the need for networking professionals who know how to plan, design, implement, operate, and troubleshoot networks relying on Windows Server. Microsoft Learning is preparing the next round of its Windows Server Certification program

  10. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  11. Designing and Testing a Database for the Qweak Measurement

    Science.gov (United States)

    Holcomb, Edward; Spayde, Damon; Pote, Tim

    2009-05-01

    The aim of the Qweak experiment is to make the most precise determination to date, aside from measurements at the Z-pole, of the Weinberg angle via a measurement of the proton's weak charge. The weak charge determines a particle's interaction with Z-type bosons. According to the Standard Model the value of the angle depends on the momentum of the exchanged Z boson and is well-determined. Deviations from the Standard Model would indicate new physics. During Qweak, bundles of longitudinally polarized electrons will be scattered from a proton target. Elastically scattered electrons will be detected in one of eight quartz bars via the emitted Cerenkov radiation. Periodically the helicity of these electrons will be reversed. The difference in the scattering rates of these two helicity states creates an asymmetry; the Weinberg angle can be calculated from this. Our role in the collaboration was the design, creation, and implementation of a database for the Qweak experiment. The purpose of this database is to store pertinent information, such as detector asymmetries and monitor calibrations, for later access. In my talk I plan to discuss the database design and the results of various tests.

  12. Design and Implementation of the CEBAF Element Database

    International Nuclear Information System (INIS)

    Larrieu, Theodore; Slominski, Christopher; Joyce, Michele

    2011-01-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on-the-fly without changing table structure. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with the exact same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous. Notice: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.

  13. The design of a DataBase for Natural Resources in Danube Delta Biosphere Reserve DDBR

    Directory of Open Access Journals (Sweden)

    GRIGORAS Ion

    2016-12-01

    Full Text Available Efficient use of natural resources, especially in Natura 2000 sites, is an essential component of the Europe 2020 strategy. A web database is necessary for good resource management and provides a communication channel for the main stakeholders: the protected area manager, scientists, resource evaluators, and the local community. Access to information in the database is granted according to user competence. General information on natural resource use in the Danube Delta Biosphere Reserve (D.D.B.R.) is freely available. Different levels of access, especially for editing data, are applied to the main actors involved in the use of natural resources: evaluators, who are mainly scientists with a good biodiversity background; protected area staff, who apply the regulations on natural resources in relation to ecological conditions; and private companies or persons interested in harvesting natural resources. The user interface is built using open-source products: the web interface for tabular data was built using the ExtJS JavaScript library, and the web map user interface was built using OpenLayers, GeoExt, and Ext. For the database server we chose PostgreSQL, with GeoServer as the map server

  14. Designing a Relational Database for the Basic School; Schools Command Web Enabled Officer and Enlisted Database (Sword)

    Science.gov (United States)

    2002-06-01

    Although the process requires some human intervention to ensure a good MOS assignment for each officer, the selection process is completed based on a numerical algorithm. (The remainder of the indexed record consists of extraction fragments of the thesis's ASP/VBScript source code, e.g. a user-logout routine.)

  15. Database on wind characteristics - Analyses of wind turbine design loads

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2004-06-01

    The main objective of IEA R&D Wind Annex XVII - Database on Wind Characteristics - has been to provide wind energy planners, designers and researchers, as well as the international wind engineering community in general, with a source of actual wind field data (time series and resource data) observed in a wide range of different wind climates and terrain types. In connection with an extension of the initial Annex period, the scope of the continuation was widened to also include support to the international wind turbine standardisation efforts. The project partners are Sweden, Norway, U.S.A., The Netherlands and Denmark, with Denmark as the Operating Agent. The reporting of the continuation of Annex XVII falls into two separate parts: part one accounts in detail for the available data in the established database bank, and part two describes various data analyses performed with the overall purpose of improving the design load cases of relevance to wind turbine structures. The present report constitutes the second part of the Annex XVII reporting. Both fatigue and extreme load aspects are dealt with, with the main emphasis on the latter. The work has been supported by The Ministry of Environment and Energy, Danish Energy Agency, The Netherlands Agency for Energy and the Environment (NOVEM), The Norwegian Water Resources and Energy Administration (NVE), The Swedish National Energy Administration (STEM) and The Government of the United States of America. (au)

  16. Reusing open data for learning database design through project development

    Directory of Open Access Journals (Sweden)

    Jose-Norberto MAZÓN

    2015-12-01

    Full Text Available This paper describes a novel methodology based on reusing open data to apply project-based learning in a Database Design course of a university degree. The methodology was applied to the ARA (Alto Rendimiento Académico, or High Academic Performance) group of the degree in Computer Engineering at the University of Alicante (Spain) during 2012/2013, 2013/2014, and 2014/2015. The openness philosophy means that huge amounts of data are available to students in tabular format, ready for reuse. In our teaching experience, students propose an original scenario in which different open data can be reused for a specific goal, and are then asked to design a database to manage these data in the envisioned scenario. Open data in the course helps instill a creative and entrepreneurial attitude in students, and encourages autonomous and lifelong learning. Surveys of students at the end of each year showed that reusing open data within project-based learning methodologies makes students more motivated, since they are using real data.

  17. Object-oriented modeling and design of database federations

    NARCIS (Netherlands)

    Balsters, H.

    2003-01-01

    We describe a logical architecture and a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous component databases into a global integrated system. Our approach to database federation

  18. SAS Enterprise Data Integration Server - A Complete Solution Designed To Meet the Full Spectrum of Enterprise Data Integration Needs

    Directory of Open Access Journals (Sweden)

    Silvia BOLOHAN

    2012-05-01

    Full Text Available This paper discusses why data integration is important for organizations around the world. Organizations struggle daily with the challenges of large distributed data volumes, inconsistently defined data across disparate systems, and the high expectations of data consumers, who depend on information to be correct, complete, and available when they need it. SAS Enterprise Data Integration Server provides a comprehensive solution that enables organizations to solve these challenges in a timely, cost-effective manner, with the ability to efficiently manage data integration projects on an enterprise scale, increasing overall productivity and reducing the total cost of ownership.

  19. Web server attack analyzer

    OpenAIRE

    Mižišin, Michal

    2013-01-01

    Web server attack analyzer - Abstract The goal of this work was to create a prototype analyzer of injection-flaw attacks on web servers. The proposed solution combines the capabilities of a web application firewall and a web server log analyzer. Analysis is based on configurable signatures defined by regular expressions. This paper begins with a summary of web attacks, followed by an analysis of detection techniques on web servers and a description and justification of the selected implementation. In the end are charact...
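    As a toy illustration of the general technique this work builds on, signature-based detection of injection attempts via configurable regular expressions, consider the sketch below. The signatures shown are simplistic examples, not the thesis's actual rule set.

```python
# Toy signature-based scan of web server access-log lines for injection
# attempts. The regexes are simplistic illustrations, not a production rule set.
import re

SIGNATURES = {
    "sql_injection":  re.compile(r"(?i)(union\s+select|'\s*or\s+1=1|--\s*$)"),
    "xss":            re.compile(r"(?i)(<script\b|javascript:)"),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def scan_line(line: str):
    """Return the names of all signatures matching one log line."""
    return [name for name, rx in SIGNATURES.items() if rx.search(line)]

log = [
    "GET /item?id=1 HTTP/1.1",
    "GET /item?id=1' OR 1=1-- HTTP/1.1",
    "GET /view?page=../../etc/passwd HTTP/1.1",
]
for line in log:
    hits = scan_line(line)
    if hits:
        print(f"ALERT {hits}: {line}")
```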

  20. Design and Implementation of English for Academic Purpose Online Learning System Based on Browser/Server Framework

    Directory of Open Access Journals (Sweden)

    Yan Gong

    2018-03-01

    Full Text Available Today, with the rapid development of the information age, education reform is becoming internationalized. Tertiary-level EFL education in colleges and universities has also changed from its original model, focused on cultivating general-purpose linguistic skills, to one focused on students' English for Academic Purposes (EAP). EAP instruction has been vigorously popularized in research universities. To achieve informationized and standardized management of EAP instruction in universities, in this paper we design and develop an EAP online learning system on the Browser/Server (B/S) framework, around which the system's overall functions are designed. MySQL is chosen as the database used to implement the main object module, while JSP technology is used to support a cross-platform mechanism for access to diverse data sources. Operational testing shows that the system is operable, easy to use and maintain, meets the needs of university students for EAP learning and teaching management, and improves students' EAP learning model and efficiency.

  1. GeoServer cookbook

    CERN Document Server

    Iacovella, Stefano

    2014-01-01

    This book is ideal for GIS experts, developers, and system administrators who have had a first glance at GeoServer and who are eager to explore all its features in order to configure professional map servers. Basic knowledge of GIS and GeoServer is required.

  2. PostgreSQL server programming

    CERN Document Server

    Krosing, Hannu

    2013-01-01

    This practical guide leads you through numerous aspects of working with PostgreSQL. Step-by-step examples allow you to easily set up and extend PostgreSQL. "PostgreSQL Server Programming" is for moderate to advanced PostgreSQL database professionals. To get the best understanding of this book, you should have general experience in writing SQL, a basic idea of query tuning, and some coding experience in a language of your choice.

  3. BUSINESS MODELLING AND DATABASE DESIGN IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Mihai-Constantin AVORNICULUI

    2015-04-01

    Full Text Available Electronic commerce has grown constantly year over year during the last decade; few areas register such growth. It covers the exchange of computerized data, but also electronic messaging, linear data banks, and electronic payment transfer. Cloud computing, a relatively new concept and term, is a model for accessing, via the Internet, distributed systems of configurable computing resources on demand, which can be made available quickly with minimal management effort and intervention from the client and the provider. Behind an electronic commerce system in the cloud there is a database which contains the information necessary for the transactions in the system. Business modelling brings many benefits, which makes the design of the database used by electronic commerce systems in the cloud considerably easier.

  4. Mastering Lync Server 2010

    CERN Document Server

    Winters, Nathan

    2012-01-01

    An in-depth guide on the leading Unified Communications platform Microsoft Lync Server 2010 maximizes communication capabilities in the workplace like no other Unified Communications (UC) solution. Written by experts who know Lync Server inside and out, this comprehensive guide shows you step by step how to administer the newest and most robust version of Lync Server. Along with clear and detailed instructions, learning is aided by exercise problems and real-world examples of established Lync Server environments. You'll gain the skills you need to effectively deploy Lync Server 2010 and be on

  5. Mastering Microsoft Windows Small Business Server 2008

    CERN Document Server

    Johnson, Steven

    2010-01-01

    A complete, winning approach to the number one small business solution. Do you have 75 or fewer users or devices on your small-business network? Find out how to integrate everything you need for your mini-enterprise with Microsoft's new Windows Server 2008 Small Business Server, a custom collection of server and management technologies designed to help small operations run smoothly without a giant IT department. This comprehensive guide shows you how to master all SBS components as well as handle integration with other Microsoft technologies.: Focuses on Windows Server 2008 Small Business Serv

  6. Microsoft Windows Server 2012 administration instant reference

    CERN Document Server

    Hester, Matthew

    2013-01-01

    Fast, accurate answers for common Windows Server questions Serving as a perfect companion to all Windows Server books, this reference provides you with quick and easily searchable solutions to day-to-day challenges of Microsoft's newest version of Windows Server. Using helpful design features such as thumb tabs, tables of contents, and special heading treatments, this resource boasts a smooth and seamless approach to finding information. Plus, quick-reference tables and lists provide additional on-the-spot answers. Covers such key topics as server roles and functionality, u

  7. Maintaining the Database for Information Object Analysis, Intent, Dissemination and Enhancement (IOAIDE) and the US Army Research Laboratory Campus Sensor Network (ARL CSN)

    Science.gov (United States)

    2017-01-01

    Working with the database assumes basic knowledge of database operations as well as of Microsoft SQL Server Management Studio (2014 or 2016). The IOAIDE/ARL CSN database is designed and maintained with Microsoft SQL Server Management Studio; the basic requirement for database development and maintenance is a machine with SQL Server (2014 or 2016) installed. All images in this report were generated using Windows 10. The IOAIDE/ARL CSN database could reside on the

  8. Optimizing queries in SQL Server 2008

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2010-05-01

    Full Text Available Starting from the need to develop efficient IT systems, we review the optimization methods and tools that can be used by SQL Server database administrators and developers of applications based on Microsoft technology, focusing on the latest version of the proprietary DBMS, SQL Server 2008. We reflect on the objectives to be considered in improving the performance of SQL Server instances, tackle the most commonly used techniques for analyzing and optimizing queries, and describe the new "Optimize for ad hoc workloads", "Plan Freezing" and "Optimize for unknown" options, accompanied by relevant code examples.
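    For instance, the instance-level "optimize for ad hoc workloads" option mentioned above is toggled through sp_configure. A minimal sketch of doing this from Python via pyodbc follows; the connection string is a placeholder and the change assumes appropriate server permissions.

```python
# Sketch: enabling SQL Server's "optimize for ad hoc workloads" option via
# sp_configure from Python/pyodbc. Connection string is a placeholder.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes", autocommit=True)
cur = conn.cursor()
# The option is an 'advanced' one, so advanced options must be visible first.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute("EXEC sp_configure 'optimize for ad hoc workloads', 1; RECONFIGURE;")
for row in cur.execute("EXEC sp_configure 'optimize for ad hoc workloads';"):
    print(row)  # name, minimum, maximum, config_value, run_value
```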

  9. LECTINPred: web Server that Uses Complex Networks of Protein Structure for Prediction of Lectins with Potential Use as Cancer Biomarkers or in Parasite Vaccine Design.

    Science.gov (United States)

    Munteanu, Cristian R; Pedreira, Nieves; Dorado, Julián; Pazos, Alejandro; Pérez-Montoto, Lázaro G; Ubeira, Florencio M; González-Díaz, Humberto

    2014-04-01

    Lectins (Ls) play an important role in many diseases, such as different types of cancer, parasitic infections, and other conditions. Interestingly, the Protein Data Bank (PDB) contains more than 3000 protein 3D structures with unknown function. Thus, we can in principle discover new Ls by mining non-annotated structures from the PDB or other sources. However, there are no general models to predict new biologically relevant Ls based on 3D chemical structure. We used the MARCH-INSIDE software to calculate Markov-Shannon 3D electrostatic entropy parameters for the complex networks of protein structure of 2200 different protein 3D structures, including 1200 Ls. We performed a Linear Discriminant Analysis (LDA) using these parameters as inputs in order to seek a new Quantitative Structure-Activity Relationship (QSAR) model able to discriminate the 3D structures of Ls from other proteins. We implemented this predictor in the web server named LECTINPred, freely available at http://bio-aims.udc.es/LECTINPred.php. This web server showed the following goodness-of-fit statistics: Sensitivity=96.7 % (for Ls), Specificity=87.6 % (non-active proteins), and Accuracy=92.5 % (for all proteins), considering both the training and external prediction series together. In mode 2, users can carry out automatic retrieval of protein structures from the PDB. We illustrated the use of this server, in operation mode 1, by performing a data mining of the PDB: we predicted Ls scores for more than 2000 proteins with unknown function and selected the top-scoring ones as possible lectins. In operation mode 2, LECTINPred can also upload 3D structural models generated with structure-prediction tools like LOMETS or PHYRE2. The new Ls are expected to be of relevance as cancer biomarkers or useful in parasite vaccine design. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
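    The classifier family used here, linear discriminant analysis over structure-derived descriptors, is standard. The sketch below shows the generic LDA workflow with scikit-learn on synthetic descriptor vectors; the data are fabricated stand-ins, not MARCH-INSIDE output, and the class sizes merely echo the counts quoted above.

```python
# Generic LDA classification workflow (scikit-learn) on synthetic descriptor
# vectors standing in for MARCH-INSIDE entropy parameters. Illustration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# 2200 "proteins" x 5 descriptors; class 1 = lectin, 0 = non-lectin.
X = np.vstack([rng.normal(0.0, 1.0, (1000, 5)),     # non-lectins
               rng.normal(0.8, 1.0, (1200, 5))])    # lectins (shifted mean)
y = np.array([0] * 1000 + [1] * 1200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print(f"external-set accuracy: {lda.score(X_te, y_te):.3f}")
```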

  10. The Development of Mobile Server for Language Courses

    OpenAIRE

    Tokumoto, Hiroko; Yoshida, Mitsunobu

    2009-01-01

    The aim of this paper is to introduce the conceptual design of the mobile server software "MY Server" for language teaching, drafted by Tokumoto, and to report how this software is designed and can be adopted effectively in Japanese language teaching. Most current server systems for education require large-scale facilities, including high-spec server machines and professional administrators, which naturally results in big-budget projects that individual teachers or small schools canno...

  11. Design and management of database using microsoft access program: application in neurointerventional unit

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seon Moon; Jeong, Gyeong Un; Kim, Tae Il; Cha, Ji Hyeon; Pyun, Hae Wook; Woo, Ryu Chang; Kim, Ho Sung; Suh, Dae Chul [University of Ulsan, College of Medicine, Seoul (Korea, Republic of)

    2005-10-15

    Complex clinical information in a cerebral angiointervention unit requires effective management and statistical analysis for the classification of diagnoses and interventions, including follow-up data from interventional treatment. We present an application of the Microsoft Access program for the management of patient data in a cerebral angiointervention unit, suggesting practical methods of recording and analyzing the patient data. Since January 2002, patient information from cerebral angiointervention has been managed by a database of over 4000 patients. We designed a program which incorporates six items: Table, Query, Form, Report, Page and Macro. Patient data, follow-up data and information regarding diagnosis and intervention were established in the Form section, related by serial number, and connected to each other in independent Tables. Problems in running the program were corrected by establishing Entity Relationship (ER) diagrams of the Tables to define the relationships between them. Convenient Queries, Forms and Reports were created to display the expected information from selected Tables. The relational program which incorporated the six items conveniently provided the number of cases per year, the incidence of disease, lesion sites, and case analysis based on interventional treatment. We were able to follow the patients after interventional procedures by creating Queries and Reports. Lists of diseases and patient files were easily identified each time by the Macro function. In addition, product names, sizes and characteristics of the materials used were indexed and easily available. The Microsoft Access program is effective in the management of patient data in a cerebral angiointervention unit. Accumulation of large amounts of complex data handled by multiple users may require client/server solutions such as Microsoft SQL Server.

  12. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    Science.gov (United States)

    1983-10-01

    2.7 Multiversion Data; 2.7.1 Multiversion Timestamping; 2.7.2 Multiversion Locking; 2.8 Combining the Techniques; 3. Database Recovery Algorithms. ...See [THEM79, GIFF79] for details. 2.7 Multiversion Data: Let us return to a database system model where each logical data item is stored at one DM. In a multiversion database each Write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions. For each

  13. Personalized Pseudonyms for Servers in the Cloud

    Directory of Open Access Journals (Sweden)

    Xiao Qiuyu

    2017-10-01

    Full Text Available A considerable and growing fraction of servers, especially web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve the privacy of clients from network attackers residing between the clients and the cloud: we design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud's tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced "popsicle"), a persistent pseudonym for a tenant server that can be used by a single client to access the server, and whose real identity is protected by the cloud from both passive and active network attackers. When instantiated for TLS-based access to web servers, our design works with all major browsers and requires no additional client-side software and minimal changes to the client user experience. Moreover, changes to tenant servers can be hidden in supporting software (operating systems and web-programming frameworks) without imposing on web-content development. Perhaps most notably, our system boosts privacy with minimal impact on web-browsing performance, after some initial setup during a user's first access to each web server.
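    The paper's mechanism involves per-client TLS and DNS plumbing that is not reproduced here, but the core idea of a persistent per-(client, server) pseudonym can be caricatured as a keyed derivation, as in the sketch below. Key handling and name-resolution integration are omitted, and all names are hypothetical.

```python
# Caricature of deriving a persistent per-(client, tenant-server) pseudonym
# with a cloud-held key. Omits the paper's TLS/DNS machinery; names are
# hypothetical placeholders.
import hmac, hashlib

CLOUD_KEY = b"\x00" * 32  # secret held by the cloud operator (placeholder)

def popsicl_name(client_id: str, tenant_host: str) -> str:
    """Stable pseudonymous hostname for one client/tenant pair."""
    tag = hmac.new(CLOUD_KEY, f"{client_id}|{tenant_host}".encode(),
                   hashlib.sha256).hexdigest()[:20]
    return f"{tag}.cloud.example.net"  # resolvable only within the cloud

# Same client + server -> same pseudonym; different client -> unlinkable name.
print(popsicl_name("client-42", "shop.tenant.example"))
print(popsicl_name("client-43", "shop.tenant.example"))
```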

  14. BEAM web server: a tool for structural RNA motif discovery.

    Science.gov (United States)

    Pietrosanto, Marco; Adinolfi, Marta; Casula, Riccardo; Ausiello, Gabriele; Ferrè, Fabrizio; Helmer-Citterich, Manuela

    2018-03-15

    RNA structural motif finding is a relevant problem that becomes computationally hard when working on high-throughput data (e.g. eCLIP, PAR-CLIP), often comprising thousands of RNA molecules. Currently, the BEAM server is the only web tool capable of handling tens of thousands of input RNAs, with a motif discovery procedure that is limited only by current secondary structure prediction accuracies. The recently developed method BEAM (BEAr Motifs finder) can analyze tens of thousands of RNA molecules and identify RNA secondary structure motifs together with a measure of their statistical significance. BEAM is extremely fast thanks to the BEAR encoding, which transforms each RNA secondary structure into a string of characters. BEAM also exploits the evolutionary knowledge contained in a substitution matrix of secondary structure elements, extracted from the RFAM database of families of homologous RNAs. The BEAM web server has been designed to streamline data pre-processing by automatically handling folding and encoding of RNA sequences, giving users a choice of preferred folding program. The server provides an intuitive and informative results page with the list of secondary structure motifs identified, the logo of each motif, its significance, a graphic representation, and information about its position in the RNA molecules sharing it. The web server is freely available at http://beam.uniroma2.it/ and is implemented in NodeJS and Python, with all major browsers supported. marco.pietrosanto@uniroma2.it. Supplementary data are available at Bioinformatics online.

  15. The GLIMS Glacier Database

    Science.gov (United States)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER imagery or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), Map
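    Because the GLIMS map interface is an OGC-compliant WMS, any client can request rendered glacier layers with a standard GetMap query. The sketch below issues one such request using the standard OGC WMS 1.1.1 parameters; the endpoint URL and layer name are placeholders, not the project's actual service configuration.

```python
# Sketch: fetching a rendered map image from an OGC Web Map Server via a
# standard WMS 1.1.1 GetMap request. Endpoint and layer name are placeholders.
import requests

WMS_URL = "http://glims.example.org/wms"  # hypothetical endpoint

params = {
    "service": "WMS", "version": "1.1.1", "request": "GetMap",
    "layers": "glacier_outlines",          # hypothetical layer name
    "srs": "EPSG:4326",                    # lat/lon coordinates
    "bbox": "86.5,27.8,87.2,28.2",         # minx,miny,maxx,maxy
    "width": 512, "height": 512,
    "format": "image/png",
}
resp = requests.get(WMS_URL, params=params, timeout=60)
resp.raise_for_status()
with open("glaciers.png", "wb") as f:
    f.write(resp.content)  # a 512x512 PNG of the requested layer and extent
```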

  16. Server virtualization solutions

    OpenAIRE

    Jonasts, Gusts

    2012-01-01

    Currently, in the part of the information technology sector responsible for server infrastructure, there is huge development in the field of server virtualization on the x86 computer architecture. Prerequisites for this development are the growth in server productivity and the underutilization of available computing power. Several companies on the market are working on two virtualization architectures – hypervisor and hosted. In this paper several of the virtualization products that use host...

  17. Disk Storage Server

    CERN Multimedia

    This model was a disk storage server used in the Data Centre up until 2012. Each tray contains a hard disk drive (see the 5TB hard disk drive on the main disk display section - this actually fits into one of the trays). There are 16 trays in all per server. There are hundreds of these servers mounted on racks in the Data Centre, as can be seen.

  18. Group-Server Queues

    OpenAIRE

    Li, Quan-Lin; Ma, Jing-Yu; Xie, Mingzhou; Xia, Li

    2017-01-01

    By analyzing energy-efficient management of data centers, this paper proposes and develops a class of interesting Group-Server Queues, and establishes two representative group-server queues through loss networks and impatient customers, respectively. These two group-server queues are given model descriptions and the necessary interpretation. Simple mathematical discussion is also provided, and simulations are made to study the expected queue lengths, the expected sojourn times ...

  19. Designing a story database for use in automatic story generation

    NARCIS (Netherlands)

    Oinonen, Katri; Theune, Mariët; Nijholt, Anton; Uijlings, Jasper; Harper, Richard; Rauterberg, Matthias; Combetto, Marco

    In this paper we propose a model for the representation of stories in a story database. The use of such a database will enable computational story generation systems to learn from previous stories and associated user feedback, in order to create believable stories with dramatic plots that invoke an

  20. Design issues of an efficient distributed database scheduler for telecom

    NARCIS (Netherlands)

    Bodlaender, M.P.; Stok, van der P.D.V.

    1998-01-01

    We optimize the speed of real-time databases by optimizing the scheduler. The performance of a database is directly linked to the environment it operates in, and we use environment characteristics as guidelines for the optimization. A typical telecom environment is investigated, and characteristics

  1. Design database for quantitative trait loci (QTL) data warehouse, data mining, and meta-analysis.

    Science.gov (United States)

    Hu, Zhi-Liang; Reecy, James M; Wu, Xiao-Lin

    2012-01-01

    A database can be used to warehouse quantitative trait loci (QTL) data from multiple sources for comparison, genomic data mining, and meta-analysis. A robust database design involves sound data-structure logistics, meaningful data transformations, normalization, and proper user interface design. This chapter starts with a brief review of relational database basics and concentrates on issues associated with the curation of QTL data into a relational database, with emphasis on the principles of data normalization and structure optimization. In addition, some simple examples of QTL data mining and meta-analysis are included; these examples are provided to help readers better understand the potential and importance of sound database design.
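    As a generic illustration of the normalization step such curation involves (not the chapter's own schema), the sketch below splits a flat QTL record table, in which trait information repeats on every row, into third-normal-form tables.

```python
# Generic normalization illustration: a flat QTL table with repeated trait
# information split into 3NF tables. Assumed column names, not the chapter's
# actual schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
-- Un-normalized staging table: trait name/units repeat on every QTL row.
CREATE TABLE qtl_flat (qtl_name TEXT, chromosome TEXT, peak_cm REAL,
                       trait_name TEXT, trait_units TEXT);

-- Normalized targets: each fact stored once.
CREATE TABLE trait (trait_id INTEGER PRIMARY KEY,
                    name TEXT UNIQUE, units TEXT);
CREATE TABLE qtl   (qtl_id INTEGER PRIMARY KEY, name TEXT,
                    chromosome TEXT, peak_cm REAL,
                    trait_id INTEGER REFERENCES trait(trait_id));
""")
db.executemany("INSERT INTO qtl_flat VALUES (?,?,?,?,?)", [
    ("Q1", "3", 45.2, "backfat depth", "mm"),
    ("Q2", "3", 61.0, "backfat depth", "mm"),
])
db.execute("INSERT INTO trait(name, units) "
           "SELECT DISTINCT trait_name, trait_units FROM qtl_flat")
db.execute("""INSERT INTO qtl(name, chromosome, peak_cm, trait_id)
              SELECT f.qtl_name, f.chromosome, f.peak_cm, t.trait_id
              FROM qtl_flat f JOIN trait t ON t.name = f.trait_name""")
print(db.execute("SELECT COUNT(*) FROM trait").fetchone())  # (1,)
```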

  2. A European Drought Reference Database: Design and Online Implementation

    NARCIS (Netherlands)

    Stagge, J.H.; Tallaksen, L.M.; Kohn, I.; Stahl, K.; Loon, van A.

    2013-01-01

    This report presents the structure and status of the online European Drought Reference (EDR) database. The website provides detailed information on major historical European drought events. Each drought event is summarized using climatological drought indices, hydrological drought

  3. Computer Aided Design for Soil Classification Relational Database ...

    African Journals Online (AJOL)

    engineering, several developers were asked what rules they applied to identify ... classification is actually a part of all good science. As Michalski ... by a large number of soil scientists. .... and use. The calculus relational database processing is.

  4. A Yeast Artificial Chromosome Library Database: Design Considerations

    OpenAIRE

    Frisse, Mark E.; Ge, NengJie; Langenbacher, JulieM.; Kahn, Michael G.; Brownstein, Bernard H.

    1990-01-01

    This paper first describes a simple collection of HyperCard stacks created and used by genetics researchers to catalog information in a human yeast artificial chromosome (YAC) library. Although an intuitive human-computer interface made the HyperCard program easy to use, the program was neither an efficient nor a secure primary database for vital laboratory data. This paper subsequently describes a relational database implementation prototype that overcomes HyperCard's deficiencies as a datab...

  5. Personalized Pseudonyms for Servers in the Cloud

    OpenAIRE

    Xiao Qiuyu; Reiter Michael K.; Zhang Yinqian

    2017-01-01

    A considerable and growing fraction of servers, especially of web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve privacy of clients from network attackers residing between the clients and the cloud: We design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud’s tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced “popsicle”), ...

  6. Microsoft® SQL Server® 2008 Analysis Services Step by Step

    CERN Document Server

    Cameron, Scott

    2009-01-01

    Teach yourself to use SQL Server 2008 Analysis Services for business intelligence-one step at a time. You'll start by building your understanding of the business intelligence platform enabled by SQL Server and the Microsoft Office System, highlighting the role of Analysis Services. Then, you'll create a simple multidimensional OLAP cube and progressively add features to help improve, secure, deploy, and maintain an Analysis Services database. You'll explore core Analysis Services 2008 features and capabilities, including dimension, cube, and aggregation design wizards; a new attribute relatio

  7. Beginning SQL Server Modeling Model-driven Application Development in SQL Server

    CERN Document Server

    Weller, Bart

    2010-01-01

    Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. Beginning SQL Server Modeling will help you gain a comprehensive understanding of how to apply DSLs and other modeling components in the development of SQL Server implementations. Most importantly, after reading the book and working through the examples, you will have considerable experience using SQL M

  8. A Framework for Mapping User-Designed Forms to Relational Databases

    Science.gov (United States)

    Khare, Ritu

    2011-01-01

    In the quest for database usability, several applications enable users to design custom forms using a graphical interface, and forward engineer the forms into new databases. The path-breaking aspect of such applications is that users are completely shielded from the technicalities of database creation. Despite this innovation, the process of…

  9. Design of multi-tiered database application based on CORBA component in SDUV-FEL system

    International Nuclear Information System (INIS)

    Sun Xiaoying; Shen Liren; Dai Zhimin

    2004-01-01

    The drawback of the usual two-tiered database architecture is analyzed, and the Shanghai Deep Ultraviolet Free Electron Laser database system under development is discussed. A project for realizing a multi-tiered database architecture based on common object request broker architecture (CORBA) components and a middleware model constructed in C++ is presented. A magnet database is given to exhibit the design of the CORBA component. (authors)

  10. Computer Aided Design for Soil Classification Relational Database ...

    African Journals Online (AJOL)

    The paper focuses on the problems associated with the classification, storage and retrieval of soil data, such as the incompatibility of soil data semantics, inadequate documentation, and lack of indexing; hence it is quite difficult to efficiently access large databases. Consequently, information on soil is very difficult ...

  11. Use of SQL Databases to Support Human Resource Management

    OpenAIRE

    Zeman, Jan

    2011-01-01

    This bachelor's thesis focuses on the design of an SQL database to support human resource management, and on its subsequent creation in MS SQL Server.

  12. Linux Server Security

    CERN Document Server

    Bauer, Michael D

    2005-01-01

    Linux consistently appears high up in the list of popular Internet servers, whether it's for the Web, anonymous FTP, or general services such as DNS and delivering mail. But security is the foremost concern of anyone providing such a service. Any server experiences casual probe attempts dozens of time a day, and serious break-in attempts with some frequency as well. This highly regarded book, originally titled Building Secure Servers with Linux, combines practical advice with a firm knowledge of the technical tools needed to ensure security. The book focuses on the most common use of Linux--

  13. Web Server Embedded System

    Directory of Open Access Journals (Sweden)

    Adharul Muttaqin

    2014-07-01

    Full Text Available Embedded systems currently receive particular attention in computer technology; various Linux operating systems and web servers have been prepared to support embedded systems, and one of the applications that can run on an embedded system is a web server. The choice of web server for embedded environments has rarely been examined, so this study focuses on two web servers whose main feature is "light" consumption of CPU and memory: Light HTTPD and Tiny HTTPD. Using thread parameters (users, ramp-up periods, and loop count) in a stress test of the embedded system, this study determines which of Light HTTPD and Tiny HTTPD is the better fit for an embedded system on a BeagleBoard, in terms of CPU and memory consumption. The results show that, with respect to CPU consumption on the BeagleBoard embedded system, Light HTTPD is recommended over Tiny HTTPD, because there is a very significant difference in CPU load between the two web services. Keywords: embedded system, web server
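    The stress-test parameters named here (users, ramp-up period, loop count) mirror those of common load tools such as Apache JMeter. A toy Python reimplementation of that parameterization is sketched below; the target URL is a placeholder, and CPU/memory on the board would be sampled separately.

```python
# Toy load generator mirroring JMeter-style parameters: number of users
# (threads), ramp-up period, and loop count. Target URL is a placeholder.
import threading, time, urllib.request

URL, USERS, RAMP_UP_S, LOOPS = "http://192.168.1.50/", 10, 5.0, 20

def user(delay: float):
    time.sleep(delay)                 # stagger thread start (ramp-up)
    for _ in range(LOOPS):
        with urllib.request.urlopen(URL, timeout=10) as r:
            r.read()                  # fetch the page; server-side CPU and
                                      # memory are measured on the board

threads = [threading.Thread(target=user, args=(i * RAMP_UP_S / USERS,))
           for i in range(USERS)]
t0 = time.time()
for t in threads: t.start()
for t in threads: t.join()
print(f"{USERS * LOOPS} requests in {time.time() - t0:.1f}s")
```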

  14. Learning Zimbra Server essentials

    CERN Document Server

    Kouka, Abdelmonam

    2013-01-01

    A standard tutorial approach that guides readers through all of the intricacies of the Zimbra Server. If you are any kind of Zimbra user, this book will be useful for you, from newbies to experts who would like to learn how to set up a Zimbra server. If you are an IT administrator or consultant who is exploring the idea of adopting Zimbra, or has already adopted it as your mail server, then this book is for you. No prior knowledge of Zimbra is required.

  15. Creating a Data Warehouse using SQL Server

    DEFF Research Database (Denmark)

    Sørensen, Jens Otto; Alnor, Karl

    1999-01-01

    In this paper we construct a Star Join Schema and show how this schema can be created using the basic tools delivered with SQL Server 7.0. Major objectives are to keep the operational database unchanged so that data loading can be done without disturbing the business logic of the operational
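    As a generic reminder of what a star join schema looks like (illustrative table names, not the paper's design), the sketch below builds a central fact table referencing denormalized dimension tables, the shape that makes star joins efficient for warehouse queries.

```python
# Generic star join schema sketch: one fact table joined to denormalized
# dimension tables. Illustrative names, not the paper's actual design.
import sqlite3

dw = sqlite3.connect(":memory:")
dw.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT,
                          month TEXT, year INTEGER);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY,
                          name TEXT, category TEXT);
CREATE TABLE fact_sales  (date_id INTEGER REFERENCES dim_date(date_id),
                          product_id INTEGER REFERENCES dim_product(product_id),
                          qty INTEGER, amount REAL);
""")
dw.execute("INSERT INTO dim_date VALUES (1, '1999-03-01', '1999-03', 1999)")
dw.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
dw.execute("INSERT INTO fact_sales VALUES (1, 1, 3, 29.85)")
# Typical star join: aggregate facts constrained through dimensions.
print(dw.execute("""
    SELECT d.year, p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d    ON d.date_id = f.date_id
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY d.year, p.category""").fetchall())
```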

  16. A database to enable discovery and design of piezoelectric materials

    Science.gov (United States)

    de Jong, Maarten; Chen, Wei; Geerlings, Henry; Asta, Mark; Persson, Kristin Aslaug

    2015-01-01

    Piezoelectric materials are used in numerous applications requiring a coupling between electrical fields and mechanical strain. Despite the technological importance of this class of materials, for only a small fraction of all inorganic compounds which display compatible crystallographic symmetry, has piezoelectricity been characterized experimentally or computationally. In this work we employ first-principles calculations based on density functional perturbation theory to compute the piezoelectric tensors for nearly a thousand compounds, thereby increasing the available data for this property by more than an order of magnitude. The results are compared to select experimental data to establish the accuracy of the calculated properties. The details of the calculations are also presented, along with a description of the format of the database developed to make these computational results publicly available. In addition, the ways in which the database can be accessed and applied in materials development efforts are described. PMID:26451252

  18. Structure and function design for nuclear facilities decommissioning information database

    International Nuclear Information System (INIS)

    Liu Yongkuo; Song Yi; Wu Xiaotian; Liu Zhen

    2014-01-01

    The decommissioning of nuclear facilities is a radioactive, high-risk project which has to consider the effects of radiation and nuclear waste disposal, so an information system for nuclear facility decommissioning projects must be established to ensure the safety of the project. In this study, by collecting decommissioning activity data, a decommissioning database was established, and based on it, the decommissioning information database (DID) was developed. The DID can perform basic operations such as input, deletion, modification, and querying of decommissioning information, and, in accordance with the processing characteristics of the various types of data, it can also manage information with different function models. On this basis, analyses of the different information data can be performed. The system is helpful for enhancing the management capability of the decommissioning process and optimizing the arrangement of the project; it can also reduce the radiation dose to workers, so the system is necessary for the safe decommissioning of nuclear facilities. (authors)

  19. Server hardware trends

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk will cover the status of the current and upcoming offers on server platforms, focusing mainly on the processing and storage parts. Alternative solutions like Open Compute (OCP) will be quickly covered.

  20. Locating Hidden Servers

    National Research Council Canada - National Science Library

    Oeverlier, Lasse; Syverson, Paul F

    2006-01-01

    .... Announced properties include server resistance to distributed DoS. Both the EFF and Reporters Without Borders have issued guides that describe using hidden services via Tor to protect the safety of dissidents as well as to resist censorship...

  1. Object-relational database design-exploiting object orientation at the ...

    African Journals Online (AJOL)

    This paper applies the object-relational database paradigm in the design of a Health Management Information System. The class design, mapping of object classes to relational tables, the representation of inheritance hierarchies, and the appropriate database schema are all examined. Keywords: object relational ...

  2. Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App

    Science.gov (United States)

    Nurnawati, E. K.; Ermawati, E.

    2018-02-01

    An integration database is a database which acts as the data store for multiple applications and thus integrates data across those applications (in contrast to an application database). An integration database needs a schema that takes all of its client applications into account. The benefit of such a schema is that sharing data among applications does not require an extra layer of integration services on the applications: any change to data made in a single application is made available to all applications at database commit, keeping the applications' data use better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile platform, based on a smart city system. The resulting database can be used by various applications, whether together or separately. The design and development of the database emphasize flexibility, security, and completeness of the attributes that can be shared by the various applications to be built. The method used in this study is to choose an appropriate logical database structure (patterns of data) and build the relational database model (database design), then test the resulting design with prototype apps and analyze system performance with test data. The integrated database can be utilized by both admins and users in an integral and comprehensive platform; it can help admins, managers, and operators manage the application easily and efficiently. The Android-based app is built on a dynamic client-server model where data is extracted from an external MySQL database, so if data changes in the database, the data in the Android application also changes. The app assists users in searching for information related to Yogyakarta (as a smart city), especially on culture, government, hotels, and transportation.

  3. Analysis and Design of Web-Based Database Application for Culinary Community

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2017-03-01

    Full Text Available This research is motivated by the rapid development of the culinary field and of information technology. The difficulty of communicating with culinary experts and of documenting recipes makes proper media support very important. Therefore, a web-based database application for the public is important to help the culinary community with communication, searching, and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the culinary community. This research used literature review, user interviews, and questionnaires. Moreover, the database system development life cycle was used as a guide for designing the database, especially for conceptual, logical, and physical database design. The web-based application design used the eight golden rules of user interface design. The result of this research is the availability of a web-based database application that can fulfill the needs of users in the culinary field related to communication and recipe management.

  4. Professional Microsoft SQL Server 2012 Integration Services

    CERN Document Server

    Knight, Brian; Moss, Jessica M; Davis, Mike; Rock, Chris

    2012-01-01

    An in-depth look at the radical changes to the newest release of SSIS. Microsoft SQL Server 2012 Integration Services (SSIS) builds on the revolutionary database product suite first introduced in 2005. With this crucial resource, you will explore how this newest release serves as a powerful tool for performing extraction, transformation, and load operations (ETL). A team of SQL Server experts deciphers this complex topic and provides detailed coverage of the new features of the 2012 product release. In addition to technical updates and additions, the authors present you with a new set of SSIS b

  5. Collaborative Design and a Web Based Database For Active Roofs

    NARCIS (Netherlands)

    Quanjel, E.M.C.J.; Otter, den A.F.H.J.; Zeiler, W.; Wamelink, H.; Prins, M.; Geraerdts, R.

    2009-01-01

    Lack of collaboration in design teams often results in a low mutual level of understanding about the design to produce. This is due to the following intertwined aspects: a lack of integration in the team, knowledge gaps between team members and failing flows of information and communication between

  6. Designing a future Conditions Database based on LHC experience

    CERN Document Server

    Formica, Andrea; The ATLAS collaboration; Gallas, Elizabeth; Govi, Giacomo; Lehmann Miotto, Giovanna; Pfeiffer, Andreas

    2015-01-01

    The ATLAS and CMS Conditions Database infrastructures have served each of the respective experiments well through LHC Run 1, providing efficient access to a wide variety of conditions information needed in online data taking and offline processing and analysis. During the long shutdown between Run 1 and Run 2, we have taken various measures to improve our systems for Run 2. In some cases, a drastic change was not possible because of the relatively short time scale to prepare for Run 2. In this process, and in the process of comparing to the systems used by other experiments, we realized that for Run 3, we should consider more fundamental changes and possibilities. We seek changes which would streamline conditions data management, improve monitoring tools, better integrate the use of metadata, incorporate analytics to better understand conditions usage, as well as investigate fundamental changes in the storage technology, which might be more efficient while minimizing maintenance of the data as well as simplif...

  7. Designing a future Conditions Database based on LHC experience

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00064378; Formica, Andrea; Gallas, Elizabeth; Lehmann Miotto, Giovanna; Pfeiffer, A.; Govi, G.

    2015-01-01

    We describe a proposal for a new Conditions Database infrastructure that ATLAS and CMS (or other) experiments could use starting on the timescale of Run 3. This proposal is based on the experience that both experiments accumulated during Run 1. We will present the identified relevant data flows for conditions data and underline the common use cases that lead to a joint effort for the development of a new system. Conditions data are needed in any scientific experiment. It includes any ancillary data associated with primary data taking such as detector configuration, state or calibration or the environment in which the detector is operating. In any non-trivial experiment, conditions data typically reside outside the primary data store for various reasons (size, complexity or availability) and are best accessed at the point of processing or analysis (including for Monte Carlo simulations). The ability of any experiment to produce correct and timely results depends on the complete and efficient availability of ne...

  8. Database and modeling assessments of the CANDU 3, PIUS, ALMR, and MHTGR designs

    International Nuclear Information System (INIS)

    Carlson, D.E.; Meyer, R.O.

    1994-01-01

    As part of the research program to support the preapplication reviews of the CANDU 3, PIUS, ALMR, and MHTGR designs, the NRC has completed preliminary assessments of databases and modeling capabilities. To ensure full coverage of all four designs, a detailed assessment methodology was developed that follows the broad logic of the NRC's Code Scaling, Applicability, and Uncertainty (CSAU) methodology. This paper describes the methodology of the database assessments and presents examples of the assessment process using preliminary results for the ALMR design

  9. Scalable Database Design of End-Game Model with Decoupled Countermeasure and Threat Information

    Science.gov (United States)

    2017-11-01

    ...the Army Modular Active Protection System (MAPS) program to provide end-to-end APS modeling and simulation capabilities. The SSES simulation features... A research project of scalable database design was initiated in support of SSES modularization efforts with respect to 4 major software components... Acronyms: Iron Curtain; KE - kinetic energy; MAPS - Modular Active Protective System; OLE DB - object linking and embedding database; RDB - relational database; RPG

  10. Analisis Perbandingan Load Balancing Web Server Tunggal Dengan Web Server Cluster Menggunakan Linux Virtual Server

    OpenAIRE

    Lukitasari, Desy; Oklilas, Ahmad Fali

    2010-01-01

    A virtual server is a server with high scalability and high availability built on top of a cluster of several real servers. The real servers and the load balancer are interconnected either over a high-speed local network or over geographically separated links. The load balancer can dispatch requests to the different servers and make the parallel services of a cluster appear on a single IP address, and request dispatching can use IP load...
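
    The core idea, a single virtual IP address fanning requests out across a cluster of real servers, reduces to a scheduling policy. A toy round-robin dispatcher is sketched below; LVS implements this scheduling inside the Linux kernel, so this only illustrates the policy, not the system itself:

```python
from itertools import cycle

# Toy round-robin dispatcher. LVS performs this scheduling inside the
# Linux kernel; this only illustrates the policy itself.
real_servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
_next = cycle(real_servers)

def dispatch():
    """Pick the real server that will handle the next request."""
    return next(_next)

for request_id in range(5):
    print(request_id, "->", dispatch())  # requests spread evenly over the cluster
```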

  11. PlanetServer: Innovative approaches for the online analysis of hyperspectral satellite data from Mars

    Science.gov (United States)

    Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.

    2014-06-01

    PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer (CRISM) hyperspectral data from Mars. It is part of the EarthServer project which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage, and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The client WebGIS, consisting mainly of the OpenLayers javascript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations, surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step, and is tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies will be further discussed. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.
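
    As a rough illustration of the query mechanism described, a WCPS query can be sent to a rasdaman/petascope endpoint as an HTTP ProcessCoverages request. The endpoint URL, coverage name and axis labels below are placeholders, not PlanetServer's actual service details:

```python
import urllib.parse
import urllib.request

# Placeholder endpoint and coverage name: the abstract does not give the
# actual PlanetServer service URL, and axis labels depend on the coverage.
ENDPOINT = "https://planetserver.example.org/rasdaman/ows"

# WCPS query: extract one band of a hyperspectral cube over a spatial subset
# and return it encoded as CSV.
wcps_query = (
    'for c in (crism_frt_example) '
    'return encode(c[band(100), E(0:100), N(0:100)], "csv")'
)

url = ENDPOINT + "?" + urllib.parse.urlencode({
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps_query,
})

# Would return the encoded subset if the placeholder endpoint existed.
with urllib.request.urlopen(url) as resp:
    print(resp.read()[:200])
```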

  12. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available First of all, please read the license of this database. Data ... (1.4 KB). Simple search and download, or download via FTP; the FTP server is sometimes jammed. If it is, access [here].

  13. Windows Terminal Servers Orchestration

    Science.gov (United States)

    Bukowiec, Sebastian; Gaspar, Ricardo; Smith, Tim

    2017-10-01

    Windows Terminal Servers provide application gateways for various parts of the CERN accelerator complex, used by hundreds of CERN users every day. The combination of new tools such as Puppet, HAProxy and the Microsoft System Center suite enables automation of provisioning workflows to provide a terminal server infrastructure that can scale up and down in an automated manner. The orchestration not only reduces the time and effort necessary to deploy new instances, but also facilitates operations such as patching, analysis and recreation of compromised nodes, as well as catering for workload peaks.

  14. NeXT server

    CERN Document Server

    1989-01-01

    The first website at CERN - and in the world - was dedicated to the World Wide Web project itself and was hosted on Berners-Lee's NeXT computer. The website described the basic features of the web; how to access other people's documents and how to set up your own server. This NeXT machine - the original web server - is still at CERN. As part of the project to restore the first website, in 2013 CERN reinstated the world's first website to its original address.

  15. Design and implementation of a biomedical image database (BDIM).

    Science.gov (United States)

    Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R

    1988-01-01

    We developed a biomedical image database (BDIM) which proposes a standardized representation of value arrays such as images and curves, and of their associated parameters, independently of their acquisition mode, to make their transmission and processing easier. It includes three kinds of user-oriented interactions. The network concept was kept as a constraint to incorporate the BDIM into a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and their associated parameters involves two distinct object bases, linked together via a gateway. The first one manages arrays according to their storage mode: long-term storage on optionally on-line mass storage devices and, for consultation, partial copies of long-term stored arrays on hard disk. The second one manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations, some of which are in agreement with the groups defined by the ACR/NEMA. The other relations describe objects resulting from the processing of initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage and its pathname constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. The query and array (single or sequence) retrieval module has access to the relations via a layer which includes a dictionary managed by ORACLE. This dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS. (ABSTRACT TRUNCATED AT 250 WORDS)

  16. Design & Implementation of Company Database for MME Subcontracting Unit

    CERN Document Server

    Horvath, Benedek

    2016-01-01

    The purpose of this document is to introduce the software stack designed and implemented by me, during my student project. The report includes both the project description, the requirements set against the solution, the already existing alternatives for solving the problem, and the final solution that has been implemented. Reading this document you may have a better understanding of what I was working on for eleven weeks in the summer of 2016.

  17. The design and implementation of embedded database in HIRFL-CSR

    International Nuclear Information System (INIS)

    Xu Yang; Liu Wufeng; Long Yindong; Chinese Academy of Sciences, Beijing; Qiao Weimin; Guo Yuhui

    2008-01-01

    This article introduces the design and implementation of the embedded database for the HIRFL-CSR control system. The control system has a three-level database for centralized management and distributed control. Levels I and II are based on Windows and connect with each other through the ODBC interface of an Oracle database. Level III, at the device level of the control system, is based on embedded Linux and communicates with the others through the SQLite database engine. The central control room sends wave data, event tables and other data to the front-end embedded database in advance; during an experiment, the embedded SQLite database relays the data to the wave-generator DSP. Under the control of the synchronization trigger, the DSP generates wave data for the control of the power supply and the magnetic field. (authors)

  18. Professional SQL Server 2005 administration

    CERN Document Server

    Knight, Brian; Snyder, Wayne; Armand, Jean-Claude; LoForte, Ross; Ji, Haidong

    2007-01-01

    SQL Server 2005 is the largest leap forward for SQL Server since its inception. With this update comes new features that will challenge even the most experienced SQL Server DBAs. Written by a team of some of the best SQL Server experts in the industry, this comprehensive tutorial shows you how to navigate the vastly changed landscape of the SQL Server administration. Drawing on their own first-hand experiences to offer you best practices, unique tips and tricks, and useful workarounds, the authors help you handle even the most difficult SQL Server 2005 administration issues, including blockin

  19. Database on wind characteristics - Analyses of wind turbine design loads

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, K.S.

    2004-01-01

    the design load cases with relevance to wind turbine structures. The present report constitutes the second part of the Annex XVII reporting. Both fatigue and extreme load aspects are dealt with, however, with the main emphasis on the latter. The work has been supported by The Ministry of Environment and Energy, Danish Energy Agency, The Netherlands Agency for Energy and the Environment (NOVEM), The Norwegian Water Resources and Energy Administration (NVE), The Swedish National Energy Administration (STEM) and The Government of the United States of America.

  20. Web Based Database Processing for Turkish Navy Officers in USA

    National Research Council Canada - National Science Library

    Ozkan, Gokhan

    2002-01-01

    ...) and details the supporting web server and database server choices. It then presents a prototype of a web-based database system to speed and simplify tracking of academic and personal information...

  1. Teaching Databases at Southampton University

    OpenAIRE

    Thomas, Ken

    2003-01-01

    In this paper, we describe some of the issues faced when designing a database systems course that will be a compulsory component for second year undergraduates in computer science. The main goal is to give an overview of database systems, starting from Codd's classical paper through to practical implementation using an SQL server (MySQL). For conceptual modelling, we chose UML because of the prior knowledge of the target class. The logical model is derived from the conceptual model and we place ...

  2. Design and Establishment of Database on the AtoN Properties for AtoN Simulator

    Directory of Open Access Journals (Sweden)

    Ji-Min Yeo

    2015-12-01

    Full Text Available This study investigated the design and establishment of an AtoN (Aids to Navigation) properties database for an AtoN simulator. In recent years, IALA (International Association of Lighthouse Authorities) has raised the need for a system that can verify whether an AtoN is appropriate for its design and placement plan. An AtoN simulator provides a simulation environment that includes the topographical and environmental characteristics of a major harbor and the characteristics of navigating ships and maritime traffic. In the present study, we developed an AtoN simulator system, including the integrated system design and the construction of an AtoN database. The AtoN Manager was developed as a virtual AtoN to manage the AtoN simulation; however, the data and materials are difficult to manage, so an AtoN Manager and an AtoN simulation system were needed for effective management of the AtoN properties data. For the database structure design, we analyzed database design methodology and the AtoN properties required for effective data management. The AtoN properties designed in the present paper were classified into 19 categories. The status and properties of the AtoN installed in 14 major ports were entered into the database and applied to the AtoN Manager. The AtoN properties database was defined so as to reduce confusion in the use of English terms and abbreviations; the terms and acronyms were defined in the AtoN properties and designed into the properties database structure for each AtoN. Before structuring the database for the AtoN Manager, the AtoN property data for each harbor were organized in Excel, and the regularized data were converted into the AtoN properties database used by the AtoN Manager. The database was designed based on an AtoN properties table structure for each AtoN, and the AtoN simulator was implemented with the AtoN Manager applied to the AtoN properties database.

  3. Analysis of Cloud-Based Database Systems

    Science.gov (United States)

    2015-06-01

    ...deploying the VM, we installed SQL Server 2014 relational database management software (RDBMS) and restored a copy of the PYTHON database onto the server... management views within SQL Server, we retrieved lists of the most commonly executed queries, the percentage of reads versus writes, as well as... Monitor. This gave us data regarding resource utilization and queueing. The second tool we used was the SQL Server Profiler provided by Microsoft

  4. Design and evaluation of a NoSQL database for storing and querying RDF data

    Directory of Open Access Journals (Sweden)

    Kanda Runapongsa Saikaew

    2014-12-01

    Full Text Available Currently the amount of web data has increased excessively. Its metadata is widely used in order to fully exploit web information resources. This causes the need for Semantic Web technology to quickly analyze such big data. Resource Description Framework (RDF is a standard for describing web resources. In this paper, we propose a method to exploit a NoSQL database, specifically MongoDB, to store and query RDF data. We choose MongoDB to represent a NoSQL database because it is one of the most popular high-performance NoSQL databases. We evaluate the proposed design and implementation by using the Berlin SPARQL Benchmark, which is one of the most widely accepted benchmarks for comparing the performance of RDF storage systems. We compare three database systems, which are Apache Jena TDB (native RDF store, MySQL (relational database, and our proposed system with MongoDB (NoSQL database. Based on the experimental results analysis, our proposed system outperforms other database systems for most queries when the data set size is small. However, for a larger data set, MongoDB performs well for queries with simple operators while MySQL offers an efficient solution for complex queries. The result of this work can provide some guideline for choosing an appropriate RDF database system and applying a NoSQL database in storing and querying RDF data.
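
    A simple way to realize the proposed layout is one MongoDB document per RDF triple, indexed on subject and predicate. The sketch below, using pymongo, is an illustration of that idea under assumed field names, not the authors' actual schema:

```python
from pymongo import MongoClient

# One document per RDF triple: a simple layout for illustration; the
# paper's actual MongoDB schema may differ.
client = MongoClient("mongodb://localhost:27017")
triples = client["rdfdb"]["triples"]
triples.create_index([("s", 1), ("p", 1)])  # speed up subject/predicate lookups

triples.insert_many([
    {"s": "ex:Berlin", "p": "rdf:type",   "o": "ex:City"},
    {"s": "ex:Berlin", "p": "ex:country", "o": "ex:Germany"},
])

# A one-hop, SPARQL-like lookup: all properties of ex:Berlin.
for triple in triples.find({"s": "ex:Berlin"}, {"_id": 0}):
    print(triple["p"], triple["o"])
```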

  5. Analysis And Database Design from Import And Export Reporting in Company in Indonesia

    Directory of Open Access Journals (Sweden)

    Novan Zulkarnain

    2016-03-01

    Full Text Available The Director General of Customs and Excise (DJBC) is a government agency that oversees exports and imports in Indonesia. Companies that receive tax exemptions and tax returns are required to report their export and import activities using IT-based reporting. This study aimed to analyze and design databases to support customs reporting, based on the report format of Director General of Customs and Excise regulation No. PER-09/BC/2014. Data collection used fact-finding techniques consisting of document study, interviews, observation, and literature study. The method used for database system design is DB-SDLC (Database System Development Life Cycle), namely conceptual design, logical design, and physical design. The result obtained is an ERD (Entity Relationship Diagram) that can be used in the development of customs reporting systems in companies throughout Indonesia. In conclusion, the ERD is able to meet all the reporting elements of customs.

  6. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  7. The design of wireless order system based on android

    Directory of Open Access Journals (Sweden)

    CHEN Xiaofeng

    2012-08-01

    Full Text Available In recent years, the tremendous development of Android mobile phones has made the design of a new wireless ordering system possible. This article discusses the system's design principles and processes comprehensively from both the client side and the server side, including the launcher, a la carte, update, and checkout modules on the client side, as well as the web server, database server, web project, and MFC project modules on the server side. Actual testing and application show that this system has high reliability and practicality.

  8. Design and Establishment of Quality Model of Fundamental Geographic Information Database

    Science.gov (United States)

    Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.

    2018-04-01

    In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper studies and establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system. It also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. The quality elements, evaluation items and properties are designed step by step on the quality model framework; connected organically, these quality elements and evaluation items constitute the quality model of the FGIDB. This model is the foundation for stipulating quality requirements and for quality evaluation of the FGIDB, and is of great significance for quality assurance in the design and development stage, for requirement formulation in the testing and evaluation stage, and for building the standard system for quality evaluation technology of Fundamental Geographic Information Databases.

  9. UNIX secure server : a free, secure, and functional server example

    OpenAIRE

    Sastre, Hugo

    2016-01-01

    The purpose of this thesis work was to introduce the UNIX server as a personal server but also as a starting point for investigation and development at a professional level. The objective of this thesis was to build a secure server providing not only an FTP server but also an HTTP server and a cloud system for remote backups. OpenBSD was used as the operating system. OpenBSD is a UNIX-like operating system made by hackers for hackers. The difference with other systems that might partially provid...

  10. SQL Server 2012 reporting services blueprints

    CERN Document Server

    Ribunal, Marlon

    2013-01-01

    Follow the fictional John Kirkland through a series of real-world reporting challenges based on actual business conditions. Use his detailed blueprints to develop your own reports for every requirement.This book is for report developers, data analysts, and database administrators struggling to master the complex world of effective reporting in SQL Server 2012. Knowledge of how data sources and data sets work will greatly help readers to speed through the tutorials.

  11. The IPE Database: providing information on plant design, core damage frequency and containment performance

    International Nuclear Information System (INIS)

    Lehner, J.R.; Lin, C.C.; Pratt, W.T.; Su, T.; Danziger, L.

    1996-01-01

    A database, called the IPE Database has been developed that stores data obtained from the Individual Plant Examinations (IPEs) which licensees of nuclear power plants have conducted in response to the Nuclear Regulatory Commission's (NRC) Generic Letter GL88-20. The IPE Database is a collection of linked files which store information about plant design, core damage frequency (CDF), and containment performance in a uniform, structured way. The information contained in the various files is based on data contained in the IPE submittals. The information extracted from the submittals and entered into the IPE Database can be manipulated so that queries regarding individual or groups of plants can be answered using the IPE Database

  12. Wireless Sensor Networks Database: Data Management and Implementation

    Directory of Open Access Journals (Sweden)

    Ping Liu

    2014-04-01

    Full Text Available As the core application of wireless sensor network technology, data management and processing have become a research hotspot in the database field. This article mainly studied data management in wireless sensor networks: in connection with the characteristics of the data in wireless sensor networks, it discussed wireless sensor network data query and data integration technology in depth, proposed a mobile database structure based on the wireless sensor network, and carried out the overall design and implementation of the data management system. In order to realize the communication rules of the routing trees described above, the network manager uses a simple routing-tree maintenance algorithm. The system is implemented by designing the ordinary-node end, the server end of the mobile database at the gathering nodes, and the mobile client end, with a focus on the design of the query manager, the storage modules, and the synchronization module at the server end of the mobile database at the gathering nodes.

  13. Design of special purpose database for credit cooperation bank business processing network system

    Science.gov (United States)

    Yu, Yongling; Zong, Sisheng; Shi, Jinfa

    2011-12-01

    With the popularization of e-finance in cities, its construction is transferring to the vast rural market and quickly developing in depth. Developing a business processing network system suitable for rural credit cooperative banks can make business processing convenient and has a good application prospect. In this paper, we analyse the necessity of adopting a special purpose distributed database in a credit cooperation bank system, give the corresponding distributed database system structure, and design the special purpose database and its interface technology. The application in Tongbai Rural Credit Cooperatives has shown that the system has better performance and higher efficiency.

  14. Analysis and Design of Web-Based Database Application for Culinary Community

    OpenAIRE

    Huda, Choirul; Awang, Osel Dharmawan; Raymond, Raymond; Raynaldi, Raynaldi

    2017-01-01

    This research is based on the rapid development of the culinary and information technology. The difficulties in communicating with the culinary expert and on recipe documentation make a proper support for media very important. Therefore, a web-based database application for the public is important to help the culinary community in communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the cu...

  15. Earlinet database: new design and new products for a wider use of aerosol lidar data

    Science.gov (United States)

    Mona, Lucia; D'Amico, Giuseppe; Amato, Francesco; Linné, Holger; Baars, Holger; Wandinger, Ulla; Pappalardo, Gelsomina

    2018-04-01

    The EARLINET database is undergoing a complete reshaping to meet the wide demand for more intuitive products and to address the even wider demand related to new initiatives such as Copernicus, the European Earth observation programme. The new design has been carried out in continuity with the past, to take advantage of the long-term database. In particular, the new structure will provide information suitable for synergy with other instruments, near-real-time (NRT) applications, validation and process studies, and climate applications.

  16. Windows Home Server users guide

    CERN Document Server

    Edney, Andrew

    2008-01-01

    Windows Home Server brings the idea of centralized storage, backup and computer management out of the enterprise and into the home. Windows Home Server is built for people with multiple computers at home and helps to synchronize them, keep them updated, stream media between them, and back them up centrally. Built on a similar foundation as the Microsoft server operating products, it's essentially Small Business Server for the home.This book details how to install, configure, and use Windows Home Server and explains how to connect to and manage different clients such as Windows XP, Windows Vist

  17. Mastering Microsoft Exchange Server 2010

    CERN Document Server

    McBee, Jim

    2010-01-01

    A top-selling guide to Exchange Server-now fully updated for Exchange Server 2010. Keep your Microsoft messaging system up to date and protected with the very newest version, Exchange Server 2010, and this comprehensive guide. Whether you're upgrading from Exchange Server 2007 SP1 or earlier, installing for the first time, or migrating from another system, this step-by-step guide provides the hands-on instruction, practical application, and real-world advice you need.: Explains Microsoft Exchange Server 2010, the latest release of Microsoft's messaging system that protects against spam and vir

  18. Multi-dimensional database design and implementation of dam safety monitoring system

    Directory of Open Access Journals (Sweden)

    Zhao Erfeng

    2008-09-01

    Full Text Available To improve the effectiveness of dam safety monitoring database systems, the development process of a multi-dimensional conceptual data model was analyzed and a logical design was achieved in multi-dimensional database mode. The optimal data model was confirmed by identifying data objects, defining relations and reviewing entities. The conversion of relations among entities to foreign keys, and of entities and physical attributes to tables and fields, is interpreted completely. On this basis, a multi-dimensional database that reflects the management and analysis of monitoring data in a dam safety monitoring system has been established, for which fact tables and dimension tables have been designed. Finally, based on service design and user interface design, the dam safety monitoring system has been developed with Delphi as the development tool. This development project shows that the multi-dimensional database can simplify the development process and minimize hidden dangers in the database structure design; it is superior to other dam safety monitoring system development models and can provide a new research direction for system developers.
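
    The fact-table/dimension-table split described can be sketched with a minimal star schema; the sensor and time dimensions below are invented stand-ins for the paper's monitoring-specific tables:

```python
import sqlite3

# Invented fact/dimension layout in the spirit of the paper's design;
# the actual dam-monitoring schema is not reproduced here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_sensor (sensor_id INTEGER PRIMARY KEY, kind TEXT, location TEXT);
CREATE TABLE dim_time   (time_id   INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE fact_reading (
    sensor_id INTEGER REFERENCES dim_sensor(sensor_id),
    time_id   INTEGER REFERENCES dim_time(time_id),
    value     REAL
);
""")
conn.execute("INSERT INTO dim_sensor VALUES (1, 'seepage', 'gallery A')")
conn.execute("INSERT INTO dim_time VALUES (1, '2008-09-01')")
conn.execute("INSERT INTO fact_reading VALUES (1, 1, 3.7)")

# Dimensional query: average readings per sensor kind and day.
for row in conn.execute("""
    SELECT s.kind, t.day, AVG(f.value)
    FROM fact_reading f
    JOIN dim_sensor s USING (sensor_id)
    JOIN dim_time   t USING (time_id)
    GROUP BY s.kind, t.day"""):
    print(row)
```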

  19. Proposal of SQL Database to Support the Activities in Small IT Company

    OpenAIRE

    Říha, Petr

    2012-01-01

    This thesis focuses on the design of an SQL database to support the activities of a small IT company and on its creation in MS SQL Server.

  20. Getting started with SQL Server 2012 cube development

    CERN Document Server

    Lidberg, Simon

    2013-01-01

    As a practical tutorial for Analysis Services, get started with developing cubes. ""Getting Started with SQL Server 2012 Cube Development"" walks you through the basics, working with SSAS to build cubes and get them up and running.Written for SQL Server developers who have not previously worked with Analysis Services. It is assumed that you have experience with relational databases, but no prior knowledge of cube development is required. You need SQL Server 2012 in order to follow along with the exercises in this book.

  1. Developing of tensile property database system

    International Nuclear Information System (INIS)

    Park, S. J.; Kim, D. H.; Jeon, J.; Ryu, W. S.

    2002-01-01

    Constructing a database from the data produced in tensile experiments can increase the usefulness of the test results. We can also easily obtain baseline data from the database when preparing a new experiment, and produce high-quality results by comparing them with previous data. To construct the database, the analysis and design must be made specific, so that afterwards we can offer the best quality against customers' various requirements. In this thesis, the tensile database system was developed as an internet application using the JSP (Java Server Pages) tool.

  2. The ASDEX Upgrade Parameter Server

    Energy Technology Data Exchange (ETDEWEB)

    Neu, Gregor, E-mail: gregor.neu@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Gräter, Alex [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Lüddecke, Klaus [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Rapson, Christopher J.; Raupp, Gerhard; Treutterer, Wolfgang; Zasche, Dietrich; Zehetbauer, Thomas [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany)

    2015-10-15

    Highlights: • We describe our main tool in the plasma control configuration process. • Parameter access and computation are configurable with XML files. • Simple implementation of in situ tests by rerouting requests to test data. • Pulse-specific overriding of parameters. - Abstract: Concepts for the configuration of plant systems and plasma control of modern devices such as ITER and W7-X are based on global data structures, or “pulse schedules” or “experiment programs”, which specify all physics characteristics (waveforms for controlled actuators and plasma quantities) and all technical characteristics of the plant systems (diagnostics and actuators operation settings) for a planned pulse. At ASDEX Upgrade we use a different approach. We observed that the physics characteristics driving the discharge control system (DCS) are frequently modified on a pulse-to-pulse basis. Plant system operation, however, relies on technical standard settings, or “basic configurations”, to provide guaranteed resources or services, which evolve according to longer-term session or campaign operation schedules. This is why AUG manages technical configuration items separately from physics items. Consistent computation of the DCS configuration requires access to all this physics and technical data, which include the discharge programme (DP), settings of actuator systems and real-time diagnostics, the current system state and a database of static parameters. A Parameter Server provides a unified view on all these parameter sets and acts as the central point of access. We describe the functionality and architecture of the Parameter Server and its embedding into the control environment.

  3. Installing and Testing a Server Operating System

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2003-08-01

    Full Text Available The paper is based on the author's experience administering the FreeBSD server operating system on three servers in use under the academicdirect.ro domain. The paper describes a set of installation, preparation, and administration aspects of a FreeBSD server. The first issue of the paper is the installation procedure of the FreeBSD operating system on the i386 computer architecture. Discussed problems are boot disk preparation and use, hard disk partitioning, and operating system installation using an existing network topology and an internet connection. The second issue is the optimization procedure of the operating system and the installation and configuration of server services. Discussed problems are kernel and services configuration, and system and services optimization. The third issue is about client-server applications. Using operating system utility calls, we present an original application which displays the system information in a friendly web interface. An original program designed for molecular structure analysis was adapted for system performance comparisons, and it serves for a discussion of Pentium, Pentium II and Pentium III processor computation speeds. The last issue of the paper discusses the installation and configuration aspects of a dial-in service on a UNIX-based operating system. The discussion includes the configuration of serial ports and of the ppp and pppd services, and the use of the ppp and tun devices.

  4. FY1995 transduction method and CAD database systems for integrated design; 1995 nendo transduction ho to CAD database togo sekkei shien system

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The Transduction method, developed by the research coordinator and Prof. Muroga, is one of the most popular methods for designing large-scale integrated circuits, and is thus used by major design tool companies in the USA and Japan. The major objective of the research is to improve its capability and to exploit its reusability by combining it with CAD databases. The major results of the project are as follows. (1) Improvement of the Transduction method: efficiency, capability and the maximum circuit size are improved; the error compensation method is also improved. (2) Applications to new logic elements: the Transduction method is modified to cope with wired logic and FPGAs. (3) CAD databases: one of the major advantages of the Transduction method is the 'reusability' of already designed circuits, which makes it suitable to combine with CAD databases; we design CAD databases suitable for cooperative design using the Transduction method. (4) Program development: programs for Windows95 are developed for distribution. (NEDO)

  6. Windows Server® 2008 Inside Out

    CERN Document Server

    Stanek, William R

    2009-01-01

    Learn how to conquer Windows Server 2008-from the inside out! Designed for system administrators, this definitive resource features hundreds of timesaving solutions, expert insights, troubleshooting tips, and workarounds for administering Windows Server 2008-all in concise, fast-answer format. You will learn how to perform upgrades and migrations, automate deployments, implement security features, manage software updates and patches, administer users and accounts, manage Active Directory® directory services, and more. With INSIDE OUT, you'll discover the best and fastest ways to perform core a

  7. A RESTful Web service interface to the ATLAS COOL database

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

    The COOL database in ATLAS is primarily used for storing detector conditions data, but also status flags, which are uploaded summaries of information indicating detector reliability during a run. This paper introduces the use of CherryPy, a Python application server which acts as an intermediate layer between a web interface and the database, providing a simple means of storing to and retrieving from the COOL database that has found use in many web applications. The software layer is designed to be RESTful, implementing the common CRUD (Create, Read, Update, Delete) database methods by means of interpreting the HTTP method (POST, GET, PUT, DELETE) on the server along with a URL identifying the database resource to be operated on. The format of the data (text, xml etc.) is also determined by the HTTP protocol. The details of this layer are described along with a popular application demonstrating its use, the ATLAS run list web page.
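
    The CRUD-to-HTTP mapping described is exactly what CherryPy's MethodDispatcher provides. The sketch below shows the pattern with an in-memory dictionary standing in for the COOL database; it is illustrative only, not the ATLAS service code:

```python
import cherrypy

class FlagResource:
    """Maps HTTP methods onto CRUD operations. A minimal sketch of the
    pattern described, not the actual ATLAS service; an in-memory dict
    stands in for the COOL database."""
    exposed = True

    def __init__(self):
        self.store = {}

    @cherrypy.tools.json_out()
    def GET(self, run):            # Read
        return {"run": run, "flag": self.store.get(run)}

    def POST(self, run, flag):     # Create
        self.store[run] = flag

    def PUT(self, run, flag):      # Update
        self.store[run] = flag

    def DELETE(self, run):         # Delete
        self.store.pop(run, None)

if __name__ == "__main__":
    conf = {"/": {"request.dispatch": cherrypy.dispatch.MethodDispatcher()}}
    cherrypy.quickstart(FlagResource(), "/flags", conf)
```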

  8. SwePep, a database designed for endogenous peptides and mass spectrometry.

    Science.gov (United States)

    Fälth, Maria; Sköld, Karl; Norrman, Mathias; Svensson, Marcus; Fenyö, David; Andren, Per E

    2006-06-01

    A new database, SwePep, specifically designed for endogenous peptides, has been constructed to significantly speed up the identification process from complex tissue samples utilizing mass spectrometry. In the identification process the experimental peptide masses are compared with the peptide masses stored in the database both with and without possible post-translational modifications. This intermediate identification step is fast and singles out peptides that are potential endogenous peptides and can later be confirmed with tandem mass spectrometry data. Successful applications of this methodology are presented. The SwePep database is a relational database developed using MySql and Java. The database contains 4180 annotated endogenous peptides from different tissues originating from 394 different species as well as 50 novel peptides from brain tissue identified in our laboratory. Information about the peptides, including mass, isoelectric point, sequence, and precursor protein, is also stored in the database. This new approach holds great potential for removing the bottleneck that occurs during the identification process in the field of peptidomics. The SwePep database is available to the public.
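
    The intermediate identification step, comparing experimental masses against database masses with and without post-translational modification mass shifts, can be sketched in a few lines; the tolerance and peptide masses below are illustrative values, not SwePep's actual parameters:

```python
# Illustrative PTM mass shifts in daltons; the real values and the set of
# modifications considered would come from the database configuration.
PTM_SHIFTS = {"phosphorylation": 79.9663, "amidation": -0.9840}

def candidates(experimental_mass, peptide_db, tol_da=0.01):
    """Yield (peptide, modification) pairs matching within tol_da daltons."""
    for peptide, mass in peptide_db.items():
        if abs(experimental_mass - mass) <= tol_da:
            yield peptide, None                    # unmodified match
        for mod, shift in PTM_SHIFTS.items():
            if abs(experimental_mass - (mass + shift)) <= tol_da:
                yield peptide, mod                 # match assuming this PTM

# Toy database with illustrative (not authoritative) monoisotopic masses.
peptide_db = {"substance P": 1347.7354, "neurotensin": 1671.9097}
print(list(candidates(1347.73, peptide_db)))   # [('substance P', None)]
print(list(candidates(1427.70, peptide_db)))   # [('substance P', 'phosphorylation')]
```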

  9. Designing the database for a reliability aware Model-Based System Engineering process

    International Nuclear Information System (INIS)

    Cressent, Robin; David, Pierre; Idasiak, Vincent; Kratz, Frederic

    2013-01-01

    This article outlines the need for a reliability database to implement model-based description of components failure modes and dysfunctional behaviors. We detail the requirements such a database should honor and describe our own solution: the Dysfunctional Behavior Database (DBD). Through the description of its meta-model, the benefits of integrating the DBD in the system design process is highlighted. The main advantages depicted are the possibility to manage feedback knowledge at various granularity and semantic levels and to ease drastically the interactions between system engineering activities and reliability studies. The compliance of the DBD with other reliability database such as FIDES is presented and illustrated. - Highlights: ► Model-Based System Engineering is more and more used in the industry. ► It results in a need for a reliability database able to deal with model-based description of dysfunctional behavior. ► The Dysfunctional Behavior Database aims to fulfill that need. ► It helps dealing with feedback management thanks to its structured meta-model. ► The DBD can profit from other reliability database such as FIDES.

  10. Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata

    Science.gov (United States)

    Granato, Gregory E.; Tessler, Steven

    2001-01-01

    A National highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and report-review metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This database is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information. This database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many

  11. SQL Server Integration Services

    CERN Document Server

    Hamilton, Bill

    2007-01-01

    SQL Server 2005 Integration Services (SSIS) lets you build high-performance data integration solutions. SSIS solutions wrap sophisticated workflows around tasks that extract, transform, and load (ETL) data from and to a wide variety of data sources. This Short Cut begins with an overview of key SSIS concepts, capabilities, standard workflow and ETL elements, the development environment, execution, deployment, and migration from Data Transformation Services (DTS). Next, you'll see how to apply the concepts you've learned through hands-on examples of common integration scenarios. Once you've

  12. Establishment of China Nuclear Fuel Assembly Database

    International Nuclear Information System (INIS)

    Chen Peng; Jin Yongli; Zhang Yingchao; Lu Huaquan; Chen Jianxin

    2009-01-01

    The China Nuclear Fuel Assembly Database (CNFAD) is developed based on the Oracle system. It contains information on fuel assemblies in the stages of design, fabrication and post-irradiation examination (PIE). The Browser/Server structure, which supports the HTTP protocol, is adopted in the development of the software. It uses a Java interface to transfer code from the server to the clients, making reasonable and sufficient use of the resources of both server and clients, so it can perform complicated tasks. Data from the various stages of the life of fuel assemblies in Pressurized Water Reactors (PWR), such as design, fabrication, operation, and post-irradiation examination, can be stored in this database. Data can be shared by multiple users and communicated over long distances. By using CNFAD, the problem of the decentralization of fuel data in China's nuclear power plants will be solved. (authors)

  13. The PARIGA server for real time filtering and analysis of reciprocal BLAST results.

    Directory of Open Access Journals (Sweden)

    Massimiliano Orsini

    Full Text Available BLAST-based similarity searches are commonly used in several applications involving both nucleotide and protein sequences. These applications span from simple tasks such as mapping sequences onto a database to more complex procedures such as clustering or annotation processes. When the amount of analysed data increases, manual inspection of BLAST results becomes a tedious procedure. Tools for parsing or filtering BLAST results for different purposes are then required. We describe here PARIGA (http://resources.bioinformatica.crs4.it/pariga/), a server that enables users to perform all-against-all BLAST searches on two sets of sequences selected by the user. Moreover, since it stores the two BLAST outputs in a python-serialized-objects database, results can be filtered according to several parameters in a real-time fashion, without re-running the process and avoiding additional programming efforts. Results can be interrogated by the user using logical operations, for example to retrieve cases where two queries match the same targets, or where sequences from the two datasets are reciprocal best hits, or where a query matches a target in multiple regions. The PARIGA web server is designed to be a helpful tool for managing the results of sequence similarity searches. The design and implementation of the server renders all operations very fast and easy to use.
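
    One of the filters mentioned, reciprocal best hits, is straightforward to compute from two tabular BLAST outputs. The sketch below assumes a reduced three-column format (query, subject, bitscore, e.g. from BLAST+ with -outfmt "6 qseqid sseqid bitscore") and is not PARIGA's implementation:

```python
import csv

# Assumes tabular BLAST output reduced to three columns
# (e.g. produced with: blastp ... -outfmt "6 qseqid sseqid bitscore").
def best_hits(path):
    """Best-scoring subject per query."""
    best = {}
    with open(path) as handle:
        for query, subject, bitscore in csv.reader(handle, delimiter="\t"):
            if query not in best or float(bitscore) > best[query][1]:
                best[query] = (subject, float(bitscore))
    return {q: s for q, (s, _) in best.items()}

def reciprocal_best_hits(a_vs_b, b_vs_a):
    """Pairs (a, b) where a's best hit is b and b's best hit is a."""
    ab, ba = best_hits(a_vs_b), best_hits(b_vs_a)
    return [(a, b) for a, b in ab.items() if ba.get(b) == a]

# Example usage (hypothetical file names):
# print(reciprocal_best_hits("setA_vs_setB.tsv", "setB_vs_setA.tsv"))
```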

  14. SFESA: a web server for pairwise alignment refinement by secondary structure shifts.

    Science.gov (United States)

    Tong, Jing; Pei, Jimin; Grishin, Nick V

    2015-09-03

    Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.
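
    The refinement idea, sliding the query segment aligned to each secondary-structure block and keeping the best-scoring offset, can be sketched with a toy scoring function; SFESA's actual score combines profile-profile and contact-energy terms, which the identity count below merely stands in for:

```python
# Toy block-shift refinement: slide the query segment aligned to one
# secondary-structure block and keep the best-scoring offset. The identity
# count stands in for SFESA's combined sequence+structure score.
def block_score(query_seg, template_seg):
    return sum(q == t for q, t in zip(query_seg, template_seg))

def refine_block(query, template_block, start, max_shift=3):
    """Return the start offset in `query` that best aligns `template_block`."""
    n = len(template_block)
    offsets = range(max(0, start - max_shift),
                    min(len(query) - n, start + max_shift) + 1)
    return max(offsets, key=lambda o: block_score(query[o:o + n], template_block))

query = "MKVLITAGPTREALDPVRYISNHSSGKMGFAIA"
# The initial alignment put the block at offset 9; the exact match is at 7,
# so the refinement shifts the block left by two residues.
print(refine_block(query, "GPTRE", start=9))  # -> 7
```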

  15. RA radiological characterization database application

    International Nuclear Information System (INIS)

    Steljic, M.M.; Ljubenov, V.Lj. (E-mail address of corresponding author: milijanas@vin.bg.ac.yu)

    2005-01-01

    Radiological characterization of the RA research reactor is one of the main activities in the first two years of the reactor decommissioning project. The raw characterization data from direct measurements or laboratory analyses (defined within the existing sampling and measurement programme) have to be interpreted, organized and summarized in order to prepare the final characterization survey report. This report should be made so that the radiological condition of the entire site is completely and accurately shown with the radiological condition of the components clearly depicted. This paper presents an electronic database application, designed as a serviceable and efficient tool for characterization data storage, review and analysis, as well as for the reports generation. Relational database model was designed and the application is made by using Microsoft Access 2002 (SP1), a 32-bit RDBMS for the desktop and client/server database applications that run under Windows XP. (author)

  16. Sending servers to Morocco

    CERN Multimedia

    Joannah Caborn Wengler

    2012-01-01

    Did you know that computer centres are like people? They breathe air in and out like a person, they have to be kept at the right temperature, and they can even be organ donors. As part of a regular cycle of equipment renewal, the CERN Computer Centre has just donated 161 retired servers to universities in Morocco.   Prof. Abdeslam Hoummada and CERN DG Rolf Heuer seeing off the servers on the beginning of their journey to Morocco. “Many people don’t realise, but the Computer Centre is like a living thing. You don’t just install equipment and it runs forever. We’re continually replacing machines, broken parts and improving things like the cooling.” Wayne Salter, Leader of the IT Computing Facilities Group, watches over the Computer Centre a bit like a nurse monitoring a patient’s temperature, especially since new international recommendations for computer centre environmental conditions were released. “A new international s...

  17. The IMPRESS DDT: a database design toolbox based on a formal specification language

    NARCIS (Netherlands)

    Flokstra, Jan; van Keulen, Maurice; Skowronek, Jacek; Skowronek, J.

    The Database Design Tool prototype is being developed in the IMPRESS project (Esprit project 6355). The IMPRESS project started in May 1992 and aims at creating a low-level storage manager tailored for multimedia applications, together with a library of efficient operators, a programming

  18. Design and implementation of Web-based SDUV-FEL engineering database system

    International Nuclear Information System (INIS)

    Sun Xiaoying; Shen Liren; Dai Zhimin; Xie Dong

    2006-01-01

    The design of a Web-based SDUV-FEL engineering database and its implementation are introduced. This system stores and serves static data and archived data of SDUV-FEL, and builds a proper and effective platform for sharing SDUV-FEL data. It offers usable and reliable SDUV-FEL data to operators and scientists. (authors)

  19. Design, Development, and Maintenance of the GLOBE Program Website and Database

    Science.gov (United States)

    Brummer, Renate; Matsumoto, Clifford

    2004-01-01

    This is a 1-year (FY 03) proposal to design and develop enhancements, implement improved efficiency and reliability, and provide responsive maintenance for the operational GLOBE (Global Learning and Observations to Benefit the Environment) Program website and database. This proposal is renewable, with a 5% annual inflation factor providing an approximate cost for the out years.

  20. The design and development of RDGSM isotope database based on J2ME/J2EE

    International Nuclear Information System (INIS)

    Zhou Shumin; Gao Yongping; Sun Yamin

    2006-01-01

    RDGSM (the Regional Database for Geothermal Surface Manifestation) is a distributed database designed for the IAEA and used to manage isotope and hydrology data in the Asian-Pacific region. The data structure of RDGSM is introduced in the paper. A mobile database structure based on J2ME/J2EE is proposed, and the design and development of the RDGSM isotope database are demonstrated in detail. The data synchronization method, the Model-View-Controller design pattern, and the data integrity constraint method are discussed for implementing the mobile database application. (authors)

  1. BIRD: Bio-Image Referral Database. Design and implementation of a new web based and patient multimedia data focused system for effective medical diagnosis and therapy.

    Science.gov (United States)

    Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario

    2004-01-01

    This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients on an Intranet and transformed via eXtensible Stylesheet Language (XSL) so that they can be visualized in a uniform way in off-the-shelf browsers. The core server software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and well suited to crafting a dynamic Web environment.
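
    A minimal sketch of the XML-plus-stylesheet pattern the BIRD prototype describes, here using Python's lxml library rather than PHP; the record fields and the stylesheet are invented for illustration, not BIRD's actual schema.

      # Transform an XML referral record into HTML with an XSL stylesheet.
      # Record structure and stylesheet are illustrative only.
      from lxml import etree

      record = etree.XML(
          "<referral><patient>Jane Doe</patient><exam>MRI</exam></referral>")
      stylesheet = etree.XML("""\
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/referral">
          <html><body>
            <h1><xsl:value-of select="patient"/></h1>
            <p>Exam: <xsl:value-of select="exam"/></p>
          </body></html>
        </xsl:template>
      </xsl:stylesheet>""")

      transform = etree.XSLT(stylesheet)   # compile the stylesheet once
      print(str(transform(record)))        # HTML served to the client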

  2. SQL Server 2012 data integration recipes solutions for integration services and other ETL tools

    CERN Document Server

    Aspin, Adam

    2012-01-01

    SQL Server 2012 Data Integration Recipes provides focused and practical solutions to real world problems of data integration. Need to import data into SQL Server from an outside source? Need to export data and send it to another system? SQL Server 2012 Data Integration Recipes has your back. You'll find solutions for importing from Microsoft Office data stores such as Excel and Access, from text files such as CSV files, from XML, from other database brands such as Oracle and MySQL, and even from other SQL Server databases. You'll learn techniques for managing metadata, transforming data to mee

  3. Structural Design of HRA Database using generic task for Quantitative Analysis of Human Performance

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Hwan; Kim, Yo Chan; Choi, Sun Yeong; Park, Jin Kyun; Jung Won Dea [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    This paper describes the design of a generic-task-based HRA database for quantitative analysis of human performance, intended to estimate the number of task conductions. An estimation method that obtains the total task conduction number by direct counting is not easy to realize, and its data collection framework is hard to maintain. To resolve this problem, this paper suggests an indirect method and a database structure using generic tasks that enables the total number of conductions to be estimated from the instructions in the operating procedures of nuclear power plants. In order to reduce human errors, all information on the errors made by operators in the power plant should be systematically collected and examined. The Korea Atomic Energy Research Institute (KAERI) is carrying out research to develop a data collection framework to establish a Human Reliability Analysis (HRA) database that could be employed as a technical basis to generate human error probabilities (HEPs) and performance shaping factors (PSFs). As a result of the study, the essential table schema was designed for the generic task database, which stores generic tasks, procedure lists, task tree structures, and other supporting tables. Estimating the number of task conductions from the operating procedures for HEP estimation was enabled through the generic task database and framework. To verify the applicability of the framework, a case study of the simulated experiments was performed and analyzed using graphical user interfaces developed in this study.

  4. Structural Design of HRA Database using generic task for Quantitative Analysis of Human Performance

    International Nuclear Information System (INIS)

    Kim, Seung Hwan; Kim, Yo Chan; Choi, Sun Yeong; Park, Jin Kyun; Jung Won Dea

    2016-01-01

    This paper describes the design of a generic-task-based HRA database for quantitative analysis of human performance, intended to estimate the number of task conductions. An estimation method that obtains the total task conduction number by direct counting is not easy to realize, and its data collection framework is hard to maintain. To resolve this problem, this paper suggests an indirect method and a database structure using generic tasks that enables the total number of conductions to be estimated from the instructions in the operating procedures of nuclear power plants. In order to reduce human errors, all information on the errors made by operators in the power plant should be systematically collected and examined. The Korea Atomic Energy Research Institute (KAERI) is carrying out research to develop a data collection framework to establish a Human Reliability Analysis (HRA) database that could be employed as a technical basis to generate human error probabilities (HEPs) and performance shaping factors (PSFs). As a result of the study, the essential table schema was designed for the generic task database, which stores generic tasks, procedure lists, task tree structures, and other supporting tables. Estimating the number of task conductions from the operating procedures for HEP estimation was enabled through the generic task database and framework. To verify the applicability of the framework, a case study of the simulated experiments was performed and analyzed using graphical user interfaces developed in this study.
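
    A minimal sketch, in SQLite DDL driven from Python, of the kind of table structure the abstract describes (generic tasks, procedure lists, and task tree links); table and column names are assumptions for illustration, not KAERI's actual schema.

      # Illustrative schema for a generic-task HRA database (names invented).
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE generic_task (
          task_id INTEGER PRIMARY KEY,
          name    TEXT NOT NULL        -- e.g. 'verify valve position'
      );
      CREATE TABLE procedure_list (
          proc_id INTEGER PRIMARY KEY,
          title   TEXT NOT NULL        -- operating procedure document
      );
      -- Task tree: which generic task each procedure step instructs,
      -- used to estimate task-conduction counts indirectly.
      CREATE TABLE task_tree (
          proc_id INTEGER REFERENCES procedure_list(proc_id),
          step_no INTEGER,
          task_id INTEGER REFERENCES generic_task(task_id)
      );
      """)
      conn.executemany("INSERT INTO generic_task VALUES (?, ?)",
                       [(1, "verify valve position"), (2, "read indicator")])
      conn.executemany("INSERT INTO task_tree VALUES (?, ?, ?)",
                       [(1, 1, 1), (1, 2, 1), (2, 1, 2)])
      # Estimated conduction count per generic task = number of procedure
      # steps that instruct it (times executions per procedure, if known).
      rows = conn.execute("""
          SELECT g.name, COUNT(*) FROM task_tree t
          JOIN generic_task g ON g.task_id = t.task_id
          GROUP BY g.name
      """).fetchall()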

  5. The relational database system of KM3NeT

    Science.gov (United States)

    Albert, Arnauld; Bozza, Cristiano

    2016-04-01

    The KM3NeT Collaboration is building a new generation of neutrino telescopes in the Mediterranean Sea. For these telescopes, a relational database is designed and implemented for several purposes, such as the centralised management of accounts, the storage of all documentation about components and the status of the detector, and information about slow control and calibration data. It also contains information useful during the construction and the data acquisition phases. Highlights in the database schema, storage and management are discussed along with design choices that have an impact on performance. In most cases, the database is not accessed directly by applications, but via a custom-designed Web application server.

  6. Mastering Microsoft Exchange Server 2013

    CERN Document Server

    Elfassy, David

    2013-01-01

    The bestselling guide to Exchange Server, fully updated for the newest version Microsoft Exchange Server 2013 is touted as a solution for lowering the total cost of ownership, whether deployed on-premises or in the cloud. Like the earlier editions, this comprehensive guide covers every aspect of installing, configuring, and managing this multifaceted collaboration system. It offers Windows systems administrators and consultants a complete tutorial and reference, ideal for anyone installing Exchange Server for the first time or those migrating from an earlier Exchange Server version.Microsoft

  7. Microsoft Windows Server Administration Essentials

    CERN Document Server

    Carpenter, Tom

    2011-01-01

    The core concepts and technologies you need to administer a Windows Server OS Administering a Windows operating system (OS) can be a difficult topic to grasp, particularly if you are new to the field of IT. This full-color resource serves as an approachable introduction to understanding how to install a server, the various roles of a server, and how server performance and maintenance impacts a network. With a special focus placed on the new Microsoft Technology Associate (MTA) certificate, the straightforward, easy-to-understand tone is ideal for anyone new to computer administration looking t

  8. Design and Implementation of Track Record System Based on Android Platform

    Directory of Open Access Journals (Sweden)

    Zhang Jiachen

    2017-01-01

    Full Text Available To address the problem of vulnerable groups getting lost or going missing, a track record system is designed. The Android mobile terminal system is used as the platform; with the help of the Auto Navi Map Android SDK positioning function, positioning data acquisition on mobile terminals is realized. Apache Tomcat and a MySQL database are used to build a server with a C/S (client and server) architecture. The mobile terminal interacts with the server by transmitting JSON data over the HTTP protocol, and the server saves the relevant information provided by the mobile terminal, via JDBC, into the corresponding tables in the database. The system can be used to monitor the traces of family and friends; compared with a PC terminal, it is not only more flexible, convenient and fast, but also offers real-time operation and high efficiency. Testing shows that all functions work normally.
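
    A minimal sketch of the client/server exchange the abstract describes: an HTTP endpoint that accepts a JSON position fix and stores it in a relational table. It uses Python's standard library instead of Tomcat/JDBC, and the field names are assumptions for illustration.

      # Toy track-record endpoint: accept a JSON position fix over HTTP POST
      # and persist it. Field names (user_id, lat, lon, ts) are illustrative.
      import json
      import sqlite3
      from http.server import BaseHTTPRequestHandler, HTTPServer

      db = sqlite3.connect("tracks.db", check_same_thread=False)
      db.execute("CREATE TABLE IF NOT EXISTS fix (user_id TEXT, lat REAL, lon REAL, ts TEXT)")

      class TrackHandler(BaseHTTPRequestHandler):
          def do_POST(self):
              body = self.rfile.read(int(self.headers["Content-Length"]))
              fix = json.loads(body)
              db.execute("INSERT INTO fix VALUES (?, ?, ?, ?)",
                         (fix["user_id"], fix["lat"], fix["lon"], fix["ts"]))
              db.commit()
              self.send_response(204)   # stored; no body to return
              self.end_headers()

      if __name__ == "__main__":
          HTTPServer(("", 8080), TrackHandler).serve_forever()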

  9. Windows server cookbook for Windows server 2003 and Windows 2000

    CERN Document Server

    Allen, Robbie

    2005-01-01

    This practical reference guide offers hundreds of useful tasks for managing Windows 2000 and Windows Server 2003, Microsoft's latest server. These concise, on-the-job solutions to common problems are certain to save you many hours of time searching through Microsoft documentation. Topics include files, event logs, security, DHCP, DNS, backup/restore, and more

  10. System requirements and design description for the document basis database interface (DocBasis)

    International Nuclear Information System (INIS)

    Lehman, W.J.

    1997-01-01

    This document describes system requirements and the design description for the Document Basis Database Interface (DocBasis). The DocBasis application is used to manage procedures used within the tank farms. The application maintains information in a small database to track the document basis for a procedure, as well as the current version/modification level and the basis for the procedure. The basis for each procedure is substantiated by Administrative, Technical, Procedural, and Regulatory requirements. The DocBasis user interface was developed by Science Applications International Corporation (SAIC)

  11. Design of remote weather monitor system based on embedded web database

    International Nuclear Information System (INIS)

    Gao Jiugang; Zhuang Along

    2010-01-01

    The remote weather monitoring system is designed by employing embedded Web database technology with the S3C2410 microprocessor as its core. The monitoring system can simultaneously monitor multi-channel sensor signals and display various types of meteorological information on dynamic Web pages on a remote computer. An elaborated introduction is given to the construction and application of the Web database under embedded Linux. Test results show that the client accesses the Web page via GPRS or the Internet, acquires data, and displays the values of various types of meteorological information in an intuitive graphical way. (authors)

  12. Report on the basic design of the FY 1998 technical information database; 1998 nendo gijutsu joho database no kihon sekkei hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    For the purpose of promoting the effective transfer of technical development results and the distribution of technical information, a concept for the new technical database run by NEDO was studied and its basic design was carried out. In the study, the following were conducted to extract the subjects: analysis of examples of project databases such as CADDET and the scientific technology research information (JCLEARING), and evaluation/survey of the present situation of database users. In the study of the technical information database concept, the following were clarified: the classification of information, the definition of its details, the relations between the various kinds of information, the mechanisms for collecting/processing/storing/dispatching/exchanging information, and the role of NEDO. In the basic design, rough designs were produced for the project information, result report information, and research institute information as the NEDO technical information database viable for the moment. Further, to handle in a unified way databases controlled by different methods, the applicability of the general combined concept to the project database, its advantages, and its restrictions were studied. (NEDO)

  13. Modeling and Design of Capacitive Micromachined Ultrasonic Transducers Based-on Database Optimization

    International Nuclear Information System (INIS)

    Chang, M W; Gwo, T J; Deng, T M; Chang, H C

    2006-01-01

    A Capacitive Micromachined Ultrasonic Transducer (CMUT) simulation database, based on electromechanical coupling theory, has been fully developed for versatile capacitive microtransducer design and analysis. Both arithmetic and graphic configurations are used to find optimal parameters based on serial coupling simulations. The key modeling parameters identified can effectively improve a microtransducer's characteristics and reliability. This method can reduce design time and fabrication cost by eliminating trial-and-error procedures. Various microtransducers with optimized characteristics can be developed economically using the database. As a worked example, a simulation to design an ultrasonic microtransducer is completed. The dependence of the output response on membrane geometry and vibration displacement is demonstrated. The electromechanical coupling effects, mechanical impedance and frequency response are also taken into consideration for optimal microstructures. The microdevice parameters with the best output signal response are predicted, with microfabrication processing constraints and realities also taken into account.

  14. A user's manual for managing database system of tensile property

    International Nuclear Information System (INIS)

    Ryu, Woo Seok; Park, S. J.; Kim, D. H.; Jun, I.

    2003-06-01

    This manual is written for the management and maintenance of the tensile database system that manages tensile property test data. The database, constructed from the data produced by tensile property tests, can increase the application of test results. Also, we can easily get basic data from the database when preparing a new experiment, and can produce better results by comparing with previous data. To develop the database we must analyze and design the application carefully; after that, we can offer the best quality for customers' various requirements. The tensile database system was developed as an Internet application using Java, PL/SQL, and JSP (Java Server Pages) tools.

  15. Improvements to the National Transport Code Collaboration Data Server

    Science.gov (United States)

    Alexander, David A.

    2001-10-01

    The data server of the National Transport Code Collaboration Project provides a universal network interface to interpolated or raw transport data accessible by a universal set of names. Data can be acquired from a local copy of the International Multi-Tokamak (ITER) profile database as well as from TRANSP trees of MDSplus data systems on the net. Data are provided to the user's network client via a CORBA interface, thus providing stateful data server instances, which have the advantage of remembering the desired interpolation, data set, etc. This paper reviews the status of the data server and discusses recent improvements, such as its modularization and the addition of HDF5 and MDSplus data file writing capability.
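
    A minimal sketch of what a stateful data-server instance might look like: each client-held instance remembers its interpolation grid between requests. The class, names, and linear interpolation are illustrative assumptions, not the NTCC server's actual API.

      # Sketch of a stateful data-server instance remembering its grid.
      import bisect

      class DataServerInstance:
          def __init__(self, dataset):
              self.dataset = dataset    # e.g. {signal name: (xs, ys)}
              self.grid = None          # remembered interpolation abscissae

          def set_grid(self, grid):
              self.grid = sorted(grid)  # state kept across requests

          def get(self, name):
              """Return the signal linearly interpolated onto the stored grid."""
              xs, ys = self.dataset[name]
              out = []
              for x in self.grid:
                  i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
                  x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
                  out.append(y0 + (y1 - y0) * (x - x0) / (x1 - x0))
              return out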

  16. Optimal control of a server farm

    NARCIS (Netherlands)

    Adan, I.J.B.F.; Kulkarni, V.G.; Wijk, van A.C.C.

    2013-01-01

    We consider a server farm consisting of ample exponential servers, that serve a Poisson stream of arriving customers. Each server can be either busy, idle or off. An arriving customer will immediately occupy an idle server, if there is one, and otherwise, an off server will be turned on and start

  17. Austenitic stainless steels, status of the properties database and design rule development

    Energy Technology Data Exchange (ETDEWEB)

    Tavassoli, A.-A. [Commissariat a l'Energie Atomique, CEA-Saclay, Gif-sur-Yvette (France). CEREM]; Touboul, F. [DMT, Commissariat a l'Energie Atomique, CEA-Saclay, Gif-sur-Yvette (France)]

    1996-10-01

    In parallel with the new tasks initiated to substantiate the existing database for the reference structural material (type 316LN-IG) of ITER, interim design criteria are being developed to guide subsequent design stages. The French RCC-MR codes for fast breeder reactors, incorporating rules from other ITER partner codes and those needed to meet specific fusion requirements, are used for this purpose. This paper presents the current status of materials data and design rules for type 316LN-IG steel and describes how the irradiation effects are taken into account. (orig.).

  18. Access to digital library databases in higher education: design problems and infrastructural gaps.

    Science.gov (United States)

    Oswal, Sushil K

    2014-01-01

    After defining accessibility and usability, the author offers a broad survey of the research studies on digital content databases which have thus far primarily depended on data drawn from studies conducted by sighted researchers with non-disabled users employing screen readers and low vision devices. This article aims at producing a detailed description of the difficulties confronted by blind screen reader users with online library databases which now hold most of the academic, peer-reviewed journal and periodical content essential for research and teaching in higher education. The approach taken here is borrowed from descriptive ethnography which allows the author to create a complete picture of the accessibility and usability problems faced by an experienced academic user of digital library databases and screen readers. The author provides a detailed analysis of the different aspects of accessibility issues in digital databases under several headers with a special focus on full-text PDF files. The author emphasizes that long-term studies with actual, blind screen reader users employing both qualitative and computerized research tools can yield meaningful data for the designers and developers to improve these databases to a level that they begin to provide an equal access to the blind.

  19. A perspective for biomedical data integration: Design of databases for flow cytometry

    Directory of Open Access Journals (Sweden)

    Lakoumentas John

    2008-02-01

    Full Text Available Abstract Background The integration of biomedical information is essential for tackling medical problems. We describe a data model in the domain of flow cytometry (FC) allowing for massive management, analysis and integration with other laboratory and clinical information. The paper is concerned with the proper translation of the Flow Cytometry Standard (FCS) into a relational database schema, in a way that helps end users either do research on FC or study specific cases of patients who have undergone FC analysis. Results The proposed database schema provides integration of data originating from diverse acquisition settings, organized in a way that allows syntactically simple queries that provide results significantly faster than conventional implementations of the FCS standard. The proposed schema can potentially achieve up to 8 orders of magnitude reduction in query complexity and up to 2 orders of magnitude reduction in response time for data originating from flow cytometers that record 256 colours. This is mainly achieved by maintaining an almost constant number of data-mining procedures regardless of the size and complexity of the stored information. Conclusion It is evident that using single-file data storage standards for the design of databases without any structural transformations significantly limits the flexibility of databases. Analysis of the requirements of a specific domain for integration and massive data processing can provide the necessary schema modifications that will unlock the additional functionality of a relational database.
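
    A minimal sketch of the general idea (single-file FCS events broken out into a relational layout so the query shape stays simple as the number of recorded colours grows); the tables and columns are invented for illustration, not the paper's actual schema.

      # Illustrative relational layout for flow-cytometry events: one row
      # per measured value instead of one wide row per event, so query
      # shape stays constant regardless of how many channels are recorded.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE acquisition (
          acq_id     INTEGER PRIMARY KEY,
          patient_id TEXT,
          settings   TEXT               -- instrument/acquisition metadata
      );
      CREATE TABLE measurement (
          acq_id   INTEGER REFERENCES acquisition(acq_id),
          event_no INTEGER,             -- one cell/particle event
          channel  TEXT,                -- e.g. 'FSC', 'SSC', 'FL1'
          value    REAL
      );
      """)
      # One query shape answers "channel X above threshold for patient Y"
      # across acquisitions, for any number of recorded colours:
      rows = conn.execute("""
          SELECT m.event_no, m.value
          FROM measurement m JOIN acquisition a ON a.acq_id = m.acq_id
          WHERE a.patient_id = ? AND m.channel = 'FL1' AND m.value > ?
      """, ("P001", 500.0)).fetchall()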

  20. Server farms with setup costs

    NARCIS (Netherlands)

    Gandhi, A.; Harchol-Balter, M.; Adan, I.J.B.F.

    2010-01-01

    In this paper we consider server farms with a setup cost. This model is common in manufacturing systems and data centers, where there is a cost to turn servers on. Setup costs always take the form of a time delay, and sometimes there is additionally a power penalty, as in the case of data centers.

  1. Information Management in Creative Engineering Design and Capabilities of Database Transactions

    DEFF Research Database (Denmark)

    Jacobsen, Kim; Eastman, C. A.; Jeng, T. S.

    1997-01-01

    This paper examines the information management requirements and sets forth the general criteria for collaboration and concurrency control in creative engineering design. Our work attempts to recognize the full range of concurrency, collaboration and complex transactions structure now practiced...... in manual and semi-automated design and the range of capabilities needed as the demands for enhanced but flexible electronic information management unfolds.The objective of this paper is to identify new issues that may advance the use of databases to support creative engineering design. We start...... with a generalized description of the structure of design tasks and how information management in design is dealt with today. After this review, we identify extensions to current information management capabilities that have been realized and/or proposed to support/augment what designers can do now. Given...

  2. Design of a Multi Dimensional Database for the Archimed DataWarehouse.

    Science.gov (United States)

    Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine

    2005-01-01

    The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relative to the conception of the database of the data warehouse: 1) the granularity of the database, which refers to the level of detail or summarization of data, 2) the database model and architecture, describing how data will be presented to end users and how new data is integrated, 3) the life cycle of the database, in order to ensure long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multidimensional model have proved to be powerful design tools to integrate data coming from the multiple heterogeneous database systems that are part of the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long-term scalability to the system and resilience to further changes that may occur in source systems feeding the data warehouse.
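
    A minimal sketch of a multidimensional (star-schema) layout of the kind the abstract describes, with one elementary-fact table at the finest granularity and surrounding dimension tables; all names are invented for illustration, not Archimed's actual design.

      # Illustrative star schema: elementary facts (e.g. lab results)
      # linked to patient, time, and test dimensions.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE dim_patient (patient_key INTEGER PRIMARY KEY, mrn TEXT);
      CREATE TABLE dim_date    (date_key    INTEGER PRIMARY KEY, day TEXT);
      CREATE TABLE dim_test    (test_key    INTEGER PRIMARY KEY, code TEXT);
      -- Fact table at the finest granularity: one row per measured result.
      CREATE TABLE fact_result (
          patient_key INTEGER REFERENCES dim_patient(patient_key),
          date_key    INTEGER REFERENCES dim_date(date_key),
          test_key    INTEGER REFERENCES dim_test(test_key),
          value       REAL
      );
      """)
      # Typical analytical query: monthly count of one test across patients.
      conn.execute("""
          SELECT substr(d.day, 1, 7) AS month, COUNT(*)
          FROM fact_result f JOIN dim_date d ON d.date_key = f.date_key
          JOIN dim_test t ON t.test_key = f.test_key
          WHERE t.code = 'GLUCOSE'
          GROUP BY month
      """)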

  3. Microsoft Windows Server 2003: Security Enhancements and New Features

    National Research Council Canada - National Science Library

    Montehermoso, Ronald

    2004-01-01

    .... Windows NT and Windows 2000 were known to have numerous security vulnerabilities; hence Microsoft focused on improving security by making Windows Server 2003 secure by design, secure by default, secure in deployment...

  4. Miniaturized Airborne Imaging Central Server System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is a miniaturized airborne imaging central server system (MAICSS). MAICSS is designed as a high-performance-computer-based electronic backend that...

  5. Miniaturized Airborne Imaging Central Server System, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is a miniaturized airborne imaging central server system (MAICSS). MAICSS is designed as a high-performance computer-based electronic backend that...

  6. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    Full Text Available Computational encryption is used intensively by different database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over the network and replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that the SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers and developers as part of a consistent security policy.
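
    A minimal sketch of cell-level encryption before storage, assuming the third-party `cryptography` package's Fernet recipe; key management and the paper's exact scheme are not shown, this is just the general pattern of encrypting values independently of the table that holds them.

      # Encrypt column values before they reach the database, so the
      # storage layer never sees plaintext. Key handling is simplistic.
      import sqlite3
      from cryptography.fernet import Fernet

      key = Fernet.generate_key()   # in practice: a key-management service
      f = Fernet(key)

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name BLOB)")
      conn.execute("INSERT INTO person (name) VALUES (?)",
                   (f.encrypt("Alice".encode()),))

      (ciphertext,) = conn.execute("SELECT name FROM person").fetchone()
      print(f.decrypt(ciphertext).decode())   # -> 'Alice'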

  7. Pro SQL Server 2008 Analysis Services

    CERN Document Server

    Janus, Philo B

    2009-01-01

    Every business has reams of business data locked away in databases, business systems, and spreadsheets. While you may be able to build some reports by pulling a few of these repositories together, actually performing any kind of analysis on the data that runs your business can range from problematic to impossible. Pro SQL Server 2008 Analysis Services will show you how to pull that data together and present it for reporting and analysis in a way that makes the data accessible to business users, instead of needing to rely on the IT department every time someone needs a different report. * Acc

  8. Information structure design for databases a practical guide to data modelling

    CERN Document Server

    Mortimer, Andrew J

    2014-01-01

    Computer Weekly Professional Series: Information Structure Design for Databases: A Practical Guide to Data modeling focuses on practical data modeling covering business and information systems. The publication first offers information on data and information, business analysis, and entity relationship model basics. Discussions cover degree of relationship symbols, relationship rules, membership markers, types of information systems, data driven systems, cost and value of information, importance of data modeling, and quality of information. The book then takes a look at entity relationship mode

  9. Ultra-Structure database design methodology for managing systems biology data and analyses

    Directory of Open Access Journals (Sweden)

    Hemminger Bradley M

    2009-08-01

    Full Text Available Abstract Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find
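
    A minimal sketch of the rules-as-data idea the abstract describes: behavior is looked up from rows in a rule table, so end users can change what the system does by editing data rather than code. The table layout, handler names, and dispatch scheme are invented for illustration, not the Ultra-Structure methodology's actual design.

      # Rules-as-data sketch: a rule table maps (entity kind, action) to a
      # handler name; changing behavior means editing rows, not code.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE rule (kind TEXT, action TEXT, handler TEXT)")
      conn.executemany("INSERT INTO rule VALUES (?, ?, ?)", [
          ("spectrum", "map",      "map_to_genome"),
          ("peptide",  "validate", "check_mass"),
      ])

      HANDLERS = {
          "map_to_genome": lambda item: f"mapped {item}",
          "check_mass":    lambda item: f"checked {item}",
      }

      def apply_rules(kind, action, item):
          for (handler,) in conn.execute(
                  "SELECT handler FROM rule WHERE kind = ? AND action = ?",
                  (kind, action)):
              yield HANDLERS[handler](item)

      print(list(apply_rules("spectrum", "map", "scan_42")))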

  10. NEOS Server 4.0 Administrative Guide

    OpenAIRE

    Dolan, Elizabeth D.

    2001-01-01

    The NEOS Server 4.0 provides a general Internet-based client/server as a link between users and software applications. The administrative guide covers the fundamental principals behind the operation of the NEOS Server, installation and trouble-shooting of the Server software, and implementation details of potential interest to a NEOS Server administrator. The guide also discusses making new software applications available through the Server, including areas of concern to remote solver adminis...

  11. Medical video server construction.

    Science.gov (United States)

    Dańda, Jacek; Juszkiewicz, Krzysztof; Leszczuk, Mikołaj; Loziak, Krzysztof; Papir, Zdzisław; Sikora, Marek; Watza, Rafal

    2003-01-01

    The paper discusses two implementation options for a Digital Video Library, a repository used for archiving, accessing, and browsing of video medical records. Two crucial issues to be decided on are a video compression format and a video streaming platform. The paper presents numerous decision factors that have to be taken into account. The compression formats being compared are DICOM as a format representative for medical applications, both MPEGs, and several new formats targeted for IP networking. The comparison includes transmission rates supported, compression rates, and at least options for controlling a compression process. The second part of the paper presents the ISDN technique as a solution for provisioning of tele-consultation services between medical parties that are accessing resources uploaded to a digital video library. There are several backbone techniques (like corporate LANs/WANs, leased lines or even radio/satellite links) available; however, the availability of network resources for hospitals was the prevailing choice criterion pointing to ISDN solutions. Another way to provide access to the Digital Video Library is based on radio frequency domain solutions. The paper describes possibilities of both wireless and cellular networks' data transmission services to be used as a medical video server transport layer. For the cellular-network-based solution two communication techniques are used: Circuit Switched Data and Packet Switched Data.

  12. Implementation of a Server Cluster on the Raspberry Pi Using the Load Balancing Method

    Directory of Open Access Journals (Sweden)

    Ridho Habi Putra

    2016-06-01

    Full Text Available A server is an important part of a service in a computer network, and its role can determine the quality of that service. The failure of a server can be caused by several factors, including hardware damage, the network system, and the electricity supply. One solution to overcome server failure in a computer network is server clustering. The purpose of this research is to measure the capability of the Raspberry Pi (Raspi) used as a web server. The Raspberry Pi used is a Raspberry Pi 2 Model B with an ARM Cortex-A7 processor running at 900 MHz and 1 GB of RAM. The operating system used on the Raspberry Pi is Linux Debian Wheezy. The design of this research uses four Raspberry Pi devices, two of which are used as web servers while the other two serve as the load balancer and the database server. The server cluster is built using the load balancing method, in which the server load is distributed evenly across each node. Testing compares the performance of a Raspberry Pi handling data traffic alone, without a load balancer, against Raspberry Pis using a load balancer to spread the load among the cluster members.
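
    A minimal sketch of the round-robin idea behind load balancing, assuming two backend web-server addresses; this toy selector stands in for the dedicated load-balancer node used in the study.

      # Round-robin balancing sketch: pick the next backend per request.
      # Backend addresses are illustrative, not the study's actual setup.
      import itertools

      BACKENDS = ["192.168.1.11:80", "192.168.1.12:80"]  # two web-server Raspis
      pool = itertools.cycle(BACKENDS)

      def route(request_id):
          backend = next(pool)
          return f"request {request_id} -> {backend}"

      for i in range(4):
          print(route(i))   # alternates between the two backends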

  13. Development of a Personal Digital Assistant (PDA) based client/server NICU patient data and charting system.

    Science.gov (United States)

    Carroll, A E; Saluja, S; Tarczy-Hornoch, P

    2001-01-01

    Personal Digital Assistants (PDAs) offer clinicians the ability to enter and manage critical information at the point of care. Although PDAs have always been designed to be intuitive and easy to use, recent advances in technology have made them even more accessible. The ability to link data on a PDA (client) to a central database (server) allows for near-unlimited potential in developing point of care applications and systems for patient data management. Although many stand-alone systems exist for PDAs, none are designed to work in an integrated client/server environment. This paper describes the design, software and hardware selection, and preliminary testing of a PDA based patient data and charting system for use in the University of Washington Neonatal Intensive Care Unit (NICU). This system will be the subject of a subsequent study to determine its impact on patient outcomes and clinician efficiency.

  14. The Scandinavian baltic pancreatic club (SBPC) database: design, rationale and characterisation of the study cohort.

    Science.gov (United States)

    Olesen, Søren S; Poulsen, Jakob L; Drewes, Asbjørn M; Frøkjær, Jens B; Laukkarinen, Johanna; Parhiala, Mikael; Rix, Iben; Novovic, Srdan; Lindkvist, Björn; Bexander, Louise; Dimcevski, Georg; Engjom, Trond; Erchinger, Friedemann; Haldorsen, Ingfrid S; Pukitis, Aldis; Ozola-Zālīte, Imanta; Haas, Stephan; Vujasinovic, Miroslav; Löhr, J Matthias; Gulbinas, Antanas; Jensen, Nanna M; Jørgensen, Maiken T; Nøjgaard, Camilla

    2017-08-01

    Chronic pancreatitis (CP) is a multifaceted disease associated with several risk factors and a complex clinical presentation. We established the Scandinavian Baltic Pancreatic Club (SBPC) Database to characterise and study the natural history of CP in a Northern European cohort. Here, we describe the design of the database and characteristics of the study cohort. Nine centres from six different countries in the Scandinavian-Baltic region joined the database. Patients with definitive or probable CP (M-ANNHEIM diagnostic criteria) were included. Standardised case report forms were used to collect several assessment variables including disease aetiology, duration of CP, preceding acute pancreatitis, as well as symptoms, complications, and treatments. The clinical stage of CP was characterised according to M-ANNHEIM. Yearly follow-up is planned for all patients. The study cohort comprised 910 patients (608 men, 302 women; median age 58 (IQR: 48-67) years) with definite (848; 93%) or probable (62; 7%) CP. Nicotine (70%) and alcohol (59%) were the most frequent aetiologies and were seen in combination in 44% of patients. A history of recurrent acute pancreatitis was seen in 49% prior to the development of CP. Pain (69%) and exocrine pancreatic insufficiency (68%) were the most common complications, followed by diabetes (43%). Most patients (30%) were classified as clinical stage II (symptomatic CP with exocrine or endocrine insufficiency). Less than 10% of the patients had undergone pancreatic surgery. The SBPC database provides a means for future prospective, observational studies of CP in Northern Europe.

  15. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan.

    Science.gov (United States)

    Kinjo, Akira R; Yamashita, Reiko; Nakamura, Haruki

    2010-08-25

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served to users of PDBj via the World Wide Web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in a way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into a corresponding relational table that is named after the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword searches and an 'Advanced Search' that can specify various conditions on the entries. More experienced users can query the database using SQL statements, which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/
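
    A minimal sketch of the XPath-to-table idea: break an XML document into (path, value) pairs and store each path's data in a table named after the path itself. The sample document and the name-sanitization scheme are invented for illustration, not PDBj Mine's actual implementation.

      # Break XML into XPath entities and store each path in its own table.
      import sqlite3
      import xml.etree.ElementTree as ET

      def walk(elem, prefix=""):
          path = f"{prefix}/{elem.tag}"
          if elem.text and elem.text.strip():
              yield path, elem.text.strip()
          for child in elem:
              yield from walk(child, path)

      doc = ET.fromstring(
          "<entry><id>1ABC</id><title>Example structure</title></entry>")
      conn = sqlite3.connect(":memory:")
      for path, value in walk(doc):
          table = path.strip("/").replace("/", "__")  # '/entry/id' -> 'entry__id'
          conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" (value TEXT)')
          conn.execute(f'INSERT INTO "{table}" VALUES (?)', (value,))

      print(conn.execute('SELECT value FROM "entry__id"').fetchone())  # ('1ABC',)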

  16. Developing of corrosion and creep property test database system

    International Nuclear Information System (INIS)

    Park, S. J.; Jun, I.; Kim, J. S.; Ryu, W. S.

    2004-01-01

    The corrosion and creep characteristics database systems were constructed using the data produced from corrosion and creep tests, and were designed to share the data and programs of the tensile, impact, and fatigue characteristics databases constructed since 2001 and of other characteristics databases to be constructed in the future. We can easily get basic data from the corrosion and creep characteristics database systems when preparing a new experiment, and can produce high-quality results by comparing with previous test results. The development must analyze and design the database in more detail; after that, we can offer the best quality for customers' various requirements. In this thesis, we describe the procedure for the analysis, design and development of the corrosion and creep characteristics database systems, developed as an Internet application using the JSP (Java Server Pages) tool.

  17. Developing of impact and fatigue property test database system

    International Nuclear Information System (INIS)

    Park, S. J.; Jun, I.; Kim, D. H.; Ryu, W. S.

    2003-01-01

    The impact and fatigue characteristics database systems were constructed using the data produced from impact and fatigue tests, and were designed to share the data and programs of the tensile characteristics database constructed in 2001 and of other characteristics databases to be constructed in the future. We can easily get basic data from the impact and fatigue characteristics database systems when preparing a new experiment, and can produce high-quality results by comparing with previous data. The development must analyze and design the database in more detail; after that, we can offer the best quality for customers' various requirements. In this thesis, we describe the procedure for the analysis, design and development of the impact and fatigue characteristics database systems, developed as an Internet application using the JSP (Java Server Pages) tool.

  18. Establishment of Kansei Database and Application to Design for Consensus Building

    Science.gov (United States)

    Yasuda, Keiichi; Shiraki, Wataru

    Reflecting the recent social background in which the importance of bridge landscape design is recognized and a new business style of citizen-involved infrastructure development has started, there is a growing need for designs that reflect the aesthetic feelings of actual users. This research focuses on the Kansei engineering technique, in which users' needs are reflected in product development. A questionnaire survey was conducted with bridge engineers, who are most intensively involved in design work, and with students as actual users. The results were analyzed by factor analysis and Hayashi's quantification method (Category I). A tool required at consensus-building occasions has been created to change design elements and display the accompanying evaluation differences while using the Kansei database.

  19. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    Science.gov (United States)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014, Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367), tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), the use of WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System Bodies are going to be progressively available. Derived parameters such as summary products and indices can be produced through WCPS queries, as well as derived imagery colour combination products, dynamically generated and accessed also through OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the Open Source NASA WorldWind (e.g. Hogan, 2011) virtual globe as visualisation engine, and the array database Rasdaman Community Edition as core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible on http://planetserver.eu. All its code base is going to be available on GitHub, on
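
    As a sketch of how a derived spectral index might be requested through a WCPS query posed over HTTP: the endpoint URL, coverage and band names, and the exact request parameters below are assumptions for illustration only; the real PlanetServer interface may differ.

      # Hypothetical example of posing a WCPS band-math query over HTTP;
      # endpoint, coverage name, band names and parameters are invented.
      import requests

      endpoint = "http://planetserver.example/rasdaman/ows"
      query = ('for $c in (crism_cube) return encode('
               '($c.band_233 - $c.band_13) / ($c.band_233 + $c.band_13), "png")')
      resp = requests.get(endpoint, params={
          "service": "WCS", "version": "2.0.1",
          "request": "ProcessCoverages", "query": query,
      })
      with open("index_map.png", "wb") as f:
          f.write(resp.content)   # derived spectral-index map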

  20. Design And Implementation Of Tool For Detecting Anti-Patterns In Relational Database

    Directory of Open Access Journals (Sweden)

    Gaurav Kumar

    2017-07-01

    Full Text Available Anti-patterns are poor solutions to design and implementation problems. Developers may introduce anti-patterns into their software systems because of time pressure or a lack of understanding, communication, or skills. Anti-patterns create problems in software maintenance and development. Database anti-patterns lead to complex and time-consuming query processing and loss of integrity constraints. Detecting anti-patterns can reduce costs, effort, and resources. Researchers have proposed approaches to detect anti-patterns in software development, but not much research has been done on database anti-patterns. This report presents two approaches to detect schema-design anti-patterns in relational databases. The first approach is based on pattern matching: we look for potential candidates based on schema patterns. The second approach is machine-learning based: we generate features of possible anti-patterns and build an SVM-based classifier to detect them. We consider four anti-patterns: (a) multi-valued attribute, (b) naive tree, (c) entity-attribute-value, and (d) polymorphic association. We measure the precision and recall of each approach and compare the results. The SVM-based approach provides higher precision and recall with a larger training dataset.
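
    A minimal sketch of the pattern-matching approach for one of the four anti-patterns: flag likely multi-valued attributes by looking for families of numbered columns (phone1, phone2, ...). The heuristic, schema representation, and names are illustrative assumptions, not the tool's actual rules.

      # Flag candidate multi-valued-attribute anti-patterns by detecting
      # numbered column families in a schema. Heuristic is illustrative.
      import re
      from collections import defaultdict

      def find_multivalued_candidates(schema):
          """schema: {table_name: [column_name, ...]}"""
          hits = []
          for table, columns in schema.items():
              stems = defaultdict(list)
              for col in columns:
                  m = re.fullmatch(r"([a-z_]+?)_?(\d+)", col.lower())
                  if m:
                      stems[m.group(1)].append(col)
              for stem, cols in stems.items():
                  if len(cols) >= 2:      # e.g. phone1/phone2 -> suspicious
                      hits.append((table, stem, cols))
          return hits

      schema = {"contact": ["id", "name", "phone1", "phone2", "phone3"]}
      print(find_multivalued_candidates(schema))
      # [('contact', 'phone', ['phone1', 'phone2', 'phone3'])]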

  1. Mac OS X Snow Leopard Server For Dummies

    CERN Document Server

    Rizzo, John

    2009-01-01

    Making Everything Easier! Mac OS® X Snow Leopard Server for Dummies. Learn to: set up and configure a Mac network with Snow Leopard Server; administer, secure, and troubleshoot the network; incorporate a Mac subnet into a Windows Active Directory® domain; take advantage of Unix® power and security. John Rizzo. Want to set up and administer a network even if you don't have an IT department? Read on! Like everything Mac, Snow Leopard Server was designed to be easy to set up and use. Still, there are so many options and features that this book will save you heaps of time and effort. It wa

  2. Secure Server Login by Using Third Party and Chaotic System

    Science.gov (United States)

    Abdulatif, Firas A.; zuhiar, Maan

    2018-05-01

    Servers are popular among companies and used by most of them, but security threats against servers make these companies concerned about using them. In this paper we design a secure system based on a one-time password and third-party authentication (a smart phone). The proposed system secures the server login process by using a one-time password to authenticate persons who have permission to log in, with a third-party device (smart phone) as an additional level of security.
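
    As background for the one-time-password element, here is a minimal sketch of the standard time-based OTP construction (TOTP, RFC 6238) that many phone-based login schemes use; the paper's chaotic-system scheme is different and is not reproduced here.

      # Time-based one-time password (TOTP, RFC 6238) sketch: the server
      # and the phone derive the same short-lived code from a shared key.
      import base64
      import hashlib
      import hmac
      import struct
      import time

      def totp(secret_b32, digits=6, period=30):
          key = base64.b32decode(secret_b32)
          counter = int(time.time()) // period
          msg = struct.pack(">Q", counter)
          digest = hmac.new(key, msg, hashlib.sha1).digest()
          offset = digest[-1] & 0x0F                       # dynamic truncation
          code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
          return str(code % 10 ** digits).zfill(digits)

      print(totp("JBSWY3DPEHPK3PXP"))   # same code on server and phone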

  3. Data model and relational database design for the New England Water-Use Data System (NEWUDS)

    Science.gov (United States)

    Tessler, Steven

    2001-01-01

    The New England Water-Use Data System (NEWUDS) is a database for the storage and retrieval of water-use data. NEWUDS can handle data covering many facets of water use, including (1) tracking various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the description, classification and location of places and organizations involved in water-use activities; (3) details about measured or estimated volumes of water associated with water-use activities; and (4) information about data sources and water resources associated with water use. In NEWUDS, each water transaction occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NEWUDS model are site, conveyance, transaction/rate, location, and owner. Other important entities include water resources (used for withdrawals and returns), data sources, and aliases. Multiple water-exchange estimates can be stored for individual transactions based on different methods or data sources. Storage of user-defined details is accommodated for several of the main entities. Numerous tables containing classification terms facilitate detailed descriptions of data items and can be used for routine or custom data summarization. NEWUDS handles single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database structure. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.

  4. Development of the Database for Environmental Sound Research and Application (DESRA: Design, Functionality, and Retrieval Considerations

    Directory of Open Access Journals (Sweden)

    Brian Gygi

    2010-01-01

    Full Text Available Theoretical and applied environmental sounds research is gaining prominence but progress has been hampered by the lack of a comprehensive, high quality, accessible database of environmental sounds. An ongoing project to develop such a resource is described, which is based upon experimental evidence as to the way we listen to sounds in the world. The database will include a large number of sounds produced by different sound sources, with a thorough background for each sound file, including experimentally obtained perceptual data. In this way DESRA can contain a wide variety of acoustic, contextual, semantic, and behavioral information related to an individual sound. It will be accessible on the Internet and will be useful to researchers, engineers, sound designers, and musicians.

  5. CERN Document Server (CDS): Introduction

    CERN Multimedia

    CERN. Geneva; Costa, Flavio

    2017-01-01

    A short online tutorial introducing the CERN Document Server (CDS). Basic functionality description, the notion of Revisions and the CDS test environment. Links: CDS Production environment CDS Test environment  

  6. Cybersecurity, massive data processing, community interaction, and other developments at WWW-based computational X-ray Server

    Science.gov (United States)

    Stepanov, Sergey

    2013-03-01

    X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.

  7. Cybersecurity, massive data processing, community interaction, and other developments at WWW-based computational X-ray Server

    International Nuclear Information System (INIS)

    Stepanov, Sergey

    2013-01-01

    X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.

  8. The PETfold and PETcofold web servers for intra- and intermolecular structures of multiple RNA sequences

    DEFF Research Database (Denmark)

    Seemann, Ernst Stefan; Menzel, Karl Peter; Backofen, Rolf

    2011-01-01

    We present web servers to analyze multiple RNA sequences for common RNA structure and for RNA interaction sites. The web servers are based on the recent PET (Probabilistic Evolutionary and Thermodynamic) models PETfold and PETcofold, but add user-friendly features ranging from a graphical layer to interactive usage of the predictors. Additionally, the web servers provide direct access to annotated RNA alignments, such as the Rfam 10.0 database and multiple alignments of 16 vertebrate genomes with human. The web servers are freely available at: http://rth.dk/resources/petfold/

  9. Architectural design of a data warehouse to support operational and analytical queries across disparate clinical databases.

    Science.gov (United States)

    Chelico, John D; Wilcox, Adam; Wajngurt, David

    2007-10-11

    As the clinical data warehouse of the New York Presbyterian Hospital has evolved, innovative methods of integrating new data sources and providing more effective and efficient data reporting and analysis need to be explored. We designed and implemented a new clinical data warehouse architecture to handle the integration of disparate clinical databases in the institution. By examining the way downstream systems are populated and streamlining the way data is stored, we create a virtual clinical data warehouse that is adaptable to the future needs of the organization.

  10. Design and Implementation of CNEOST Image Database Based on NoSQL System

    Science.gov (United States)

    Wang, Xin

    2014-04-01

    The China Near Earth Object Survey Telescope is the largest Schmidt telescope in China, and it has acquired more than 3 TB of astronomical image data since it saw first light in 2006. After the upgrade of the CCD camera in 2013, over 10 TB of data will be obtained every year. The management of these massive images is not only an indispensable part of the data processing pipeline but also the basis of data sharing. Based on an analysis of the requirements, an image management system was designed and implemented employing a non-relational (NoSQL) database.

  11. Development of Digital Materials Database for Design and Construction of New Power Plants

    International Nuclear Information System (INIS)

    Ren, Weiju

    2008-01-01

    To facilitate materials selection, structural design, and future maintenance of the Generation IV nuclear reactor systems, an interactive, internet accessible materials property database, dubbed Gen IV Materials Handbook, has been under development with the support of the United States Department of Energy. The Handbook will provide an authoritative source of information on structural materials needed for the development of various Gen IV nuclear reactor systems along with powerful data analysis and management tools. In this paper, the background, history, framework, major features, contents, and development strategy of the Gen IV Materials Handbook are discussed. Current development status and future plans are also elucidated.

  12. Development of digital materials database for design and construction of new power plants

    International Nuclear Information System (INIS)

    Ren, W.

    2008-01-01

    To facilitate materials selection, structural design, and future maintenance of the Generation IV nuclear reactor systems, an interactive, internet accessible materials property database, dubbed Gen IV Materials Handbook, has been under development with the support of the United States Department of Energy. The Handbook will provide an authoritative source of information on structural materials needed for the development of various Gen IV nuclear reactor systems along with powerful data analysis and management tools. In this paper the background, history, framework, major features, contents, and development strategy of the Gen IV Materials Handbook are discussed. Current development status and future plans are also elucidated. (authors)

  13. Windows Server 2012 R2 administrator cookbook

    CERN Document Server

    Krause, Jordan

    2015-01-01

    This book is intended for system administrators and IT professionals with experience in Windows Server 2008 or Windows Server 2012 environments who are looking to acquire the skills and knowledge necessary to manage and maintain the core infrastructure required for a Windows Server 2012 and Windows Server 2012 R2 environment.

  14. A Capacity Supply Model for Virtualized Servers

    Directory of Open Access Journals (Sweden)

    Alexander PINNOW

    2009-01-01

    This paper deals with determining the capacity supply for virtualized servers. First, a server is modeled as a queue based on a Markov chain. Then, the effect of server virtualization on the capacity supply will be analyzed with the distribution function of the server load.
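
    The paper's own queueing model is not reproduced in the abstract. As a minimal sketch under textbook assumptions, the code below treats a virtualized server as an M/M/1 queue and sizes the capacity so that the stationary tail probability P(N > n) = rho^(n+1) stays below a target; the M/M/1 choice and all numbers are illustrative, not the authors' model.

        # Minimal M/M/1 capacity-sizing sketch; the paper's model may differ.
        def mm1_capacity(arrival_rate: float, service_rate: float, epsilon: float) -> int:
            """Smallest n with P(N > n) = rho**(n+1) <= epsilon."""
            rho = arrival_rate / service_rate
            assert rho < 1.0, "queue must be stable"
            n, tail = 0, rho                  # tail = rho**(n+1)
            while tail > epsilon:
                n += 1
                tail *= rho
            return n

        # Example: 80 requests/s offered against 100 requests/s of service capacity.
        print(mm1_capacity(arrival_rate=80.0, service_rate=100.0, epsilon=0.01))  # -> 20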

  15. [The therapeutic drug monitoring network server of tacrolimus for Chinese renal transplant patients].

    Science.gov (United States)

    Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei

    2011-07-01

    This study aims to develop a therapeutic drug monitoring (TDM) network server of tacrolimus for Chinese renal transplant patients, which can help doctors manage patients' information and provide three levels of prediction. The database management system MySQL was employed to build and manage the database of patient and doctor information, and hypertext mark-up language (HTML) and Java Server Pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, the above programming languages were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. The network server is shown to provide the basic functions for database management and three levels of prediction to aid doctors in optimizing the tacrolimus regimen for Chinese renal transplant patients.
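
    The published population model is not given in the abstract, so the sketch below only illustrates the Bayesian step it describes: minimizing a negative log posterior (residual term plus log-normal prior on the parameters) with scipy. The one-compartment oral model, population values, and observations are all invented for the example.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical population parameters (log-scale) and residual error;
        # the real tacrolimus model is more elaborate.
        POP_MU = np.log([20.0, 300.0])      # clearance CL (L/h), volume V (L)
        POP_SD = np.array([0.3, 0.4])       # between-subject variability
        SIGMA = 1.5                         # residual SD (ng/mL)

        def predict(theta, dose, times, ka=4.5):
            cl, v = theta
            ke = cl / v                     # one-compartment oral absorption model
            return dose * ka / (v * (ka - ke)) * (np.exp(-ke * times) - np.exp(-ka * times))

        def neg_log_posterior(log_theta, dose, times, conc):
            resid = conc - predict(np.exp(log_theta), dose, times)
            prior = np.sum(((log_theta - POP_MU) / POP_SD) ** 2) / 2.0
            return np.sum((resid / SIGMA) ** 2) / 2.0 + prior

        times = np.array([2.0, 6.0, 12.0])  # hours after a 5 mg dose
        conc = np.array([14.0, 9.5, 5.8])   # observed ng/mL (made up)
        fit = minimize(neg_log_posterior, POP_MU, args=(5.0, times, conc),
                       method="Nelder-Mead")
        print("MAP estimates CL, V:", np.exp(fit.x))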

  16. Server Interface Descriptions for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning; Møller, Anders; Su, Zhendong

    2013-01-01

    Automated testing of JavaScript web applications is complicated by the communication with servers. Specifically, it is difficult to test the JavaScript code in isolation from the server code and database contents. We present a practical solution to this problem. First, we demonstrate that formal server interface descriptions are useful in automated testing of JavaScript web applications for separating the concerns of the client and the server. Second, to support the construction of server interface descriptions for existing applications, we introduce an effective inference technique that learns communication patterns from sample data. By incorporating interface descriptions into the testing tool Artemis, our experimental results show that we increase the level of automation for high-coverage testing on a collection of JavaScript web applications that exchange JSON data between the clients and servers...
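
    Artemis and the paper's inference algorithm are not reproduced here. As a toy illustration of the general idea of learning an interface description from sample data, the sketch below infers a rough type structure from example JSON responses; the merging rule ("disagreements become any") is my simplification.

        import json

        def infer(value):
            """Infer a crude type description from one JSON value."""
            if isinstance(value, dict):
                return {k: infer(v) for k, v in value.items()}
            if isinstance(value, list):
                return [infer(value[0])] if value else ["unknown"]
            return type(value).__name__

        def merge(a, b):
            """Union of two inferred descriptions; disagreements become 'any'."""
            if isinstance(a, dict) and isinstance(b, dict):
                return {k: merge(a.get(k, "absent"), b.get(k, "absent"))
                        for k in set(a) | set(b)}
            return a if a == b else "any"

        samples = [json.loads(s) for s in (
            '{"id": 1, "name": "ada", "tags": ["x"]}',
            '{"id": 2, "name": null}',
        )]
        desc = infer(samples[0])
        for s in samples[1:]:
            desc = merge(desc, infer(s))
        print(desc)   # e.g. {'id': 'int', 'name': 'any', 'tags': 'any'}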

  17. Design and development for updating national 1:50,000 topographic databases in China

    Directory of Open Access Journals (Sweden)

    CHEN Jun

    2010-02-01

    1.1 Objective: Map databases are an irreplaceable national treasure of immense importance. Their currency, i.e., their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at the 1:50,000 scale is a massive and difficult task for large countries spanning several million square kilometers. This paper presents the research and technological development undertaken to support national map updating at the 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. 1.2 Methodology: The updating of map databases differs from their original creation, and a number of new problems must be solved, such as change detection using the latest multi-source data, incremental object revision and relation amendment. The methodology of this paper consists of the following three parts: (1) Examine the four key aspects of map database updating and develop basic updating models/methods, such as currentness-oriented integration of multi-source data, completeness-based incremental change detection in the context of existing datasets, consistency-aware processing of updated data sets, and user-friendly propagation and services of updates. (2) Design and develop specific software tools and packages to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image matching-based orthophoto generation, and data control at different levels. (3) Design a national 1:50,000 database updating strategy and its production workflow, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process...

  18. Mac OS X Lion Server For Dummies

    CERN Document Server

    Rizzo, John

    2011-01-01

    The perfect guide to help administrators set up Apple's Mac OS X Lion Server With the overwhelming popularity of the iPhone and iPad, more Macs are appearing in corporate settings. The newest version of Mac Server is the ideal way to administer a Mac network. This friendly guide explains to both Windows and Mac administrators how to set up and configure the server, including services such as iCal Server, Podcast Producer, Wiki Server, Spotlight Server, iChat Server, File Sharing, Mail Services, and support for iPhone and iPad. It explains how to secure, administer, and troubleshoot the network...

  19. Instant SQL Server Analysis Services 2012 Cube Security

    CERN Document Server

    Jayanty, Satya SK

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Instant Microsoft SQL Server Analysis Services 2012 Cube Security is a practical, hands-on guide that provides a number of clear, step-by-step exercises for getting started with cube security.This book is aimed at Database Administrators, Data Architects, and Systems Administrators who are managing the SQL Server data platform. It is also beneficial for analysis services developers who already have some experience with the technology, but who want to go into more detail on advanced

  20. Analisis Perbandingan Unjuk Kerja Sistem Penyeimbang Beban Web Server dengan HAProxy dan Pound Links

    Directory of Open Access Journals (Sweden)

    Dite Ardian

    2013-04-01

    The development of Internet technology has led many organizations to expand their web services. Initially a single web server accessible to everyone through the Internet may suffice, but when the number of users accessing it grows large, the traffic overloads the web server. Optimization of the web servers is therefore necessary to cope with the overload received when traffic is high. The methodology of this final-project research comprises a literature study, system design, and testing of the system, drawing on related reference books as well as several Internet sources. The design uses HAProxy and Pound Links for web server load balancing. The research concludes with testing of the network system, in order to arrive at a web server system that is reliable and safe. The result is a web server system that can be accessed by many users simultaneously and rapidly, with HAProxy and Pound Links set up as front-end load balancers, creating a web server installation with high performance and high availability.
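
    Neither HAProxy nor Pound configuration is reproduced in the abstract; purely as an illustration of the round-robin dispatching that such front-end balancers perform, the sketch below spreads incoming requests across a hypothetical pool of back-end web servers (a real deployment would list the pool in the balancer's configuration file, not in application code).

        from itertools import cycle

        # Hypothetical back-end pool behind the front-end balancer.
        BACKENDS = ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"]

        def round_robin(requests, backends=BACKENDS):
            """Yield (request, backend) pairs, spreading load evenly."""
            pool = cycle(backends)
            for req in requests:
                yield req, next(pool)

        for req, backend in round_robin([f"GET /page/{i}" for i in range(6)]):
            print(f"{req} -> {backend}")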

  1. ElasticSearch server

    CERN Document Server

    Rogozinski, Marek

    2014-01-01

    This book is a detailed, practical, hands-on guide packed with real-life scenarios and examples which will show you how to implement an ElasticSearch search engine on your own websites. If you are a web developer or a user who wants to learn more about ElasticSearch, then this is the book for you. You do not need to know anything about ElasticSearch, Java, or Apache Lucene in order to use this book, though basic knowledge about databases and queries is required.

  2. REPLIKASI UNIDIRECTIONAL PADA HETEROGEN DATABASE

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2013-01-01

    The use of diverse database technologies in enterprises today cannot be avoided. Thus, technology is needed to generate information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we replicate from a Windows-based MS SQL Server database to a Linux-based Oracle database as the target. The research method used is prototyping, where development can be done quickly, with testing of working models of the...

  3. QlikView Server and Publisher

    CERN Document Server

    Redmond, Stephen

    2014-01-01

    This is a comprehensive guide with a step-by-step approach that enables you to host and manage servers using QlikView Server and QlikView Publisher. If you are a server administrator wanting to learn about how to deploy QlikView Server for server management, analysis and testing, and QlikView Publisher for publishing of business content, then this is the perfect book for you. No prior experience with QlikView is expected.

  4. A Browser-Server-Based Tele-audiology System That Supports Multiple Hearing Test Modalities.

    Science.gov (United States)

    Yao, Jianchu Jason; Yao, Daoyuan; Givens, Gregg

    2015-09-01

    Millions of global citizens suffering from hearing disorders have limited or no access to much-needed hearing healthcare. Although tele-audiology presents a solution to alleviate this problem, existing remote hearing diagnosis systems support only pure-tone tests, leaving speech and other test procedures unsolved, due to the lack of software and hardware to enable the communication required between audiologists and their remote patients. This article presents a comprehensive remote hearing test system that integrates the two most needed hearing test procedures: a pure-tone audiogram and a speech test. This enhanced system is composed of a Web application server, an embedded smart Internet-Bluetooth® (Bluetooth SIG, Kirkland, WA) gateway (or console device), and a Bluetooth-enabled audiometer. Several graphical user interfaces and a relational database are hosted on the application server. The console device has been designed to support the tests and auxiliary communication between the local site and the remote site. The study was conducted at an audiology laboratory. Pure-tone audiogram and speech test results from volunteers tested with this tele-audiology system are comparable with results from the traditional face-to-face approach. This browser-server-based comprehensive tele-audiology system offers a flexible platform to expand hearing services to traditionally underserved groups.

  5. An approach for access differentiation design in medical distributed applications built on databases.

    Science.gov (United States)

    Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K

    1999-01-01

    A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application posits new essential problems for software. Particularly protection tools, which are sufficient separately, become deficient during the integration due to specific additional links and relationships not considered formerly. E.g., it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools, if the object is stored as a record in DB tables. The solution of the problem should be found only within the more general application framework. Appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications which use databases. The appropriate formal model as well as tools for its mapping to a DMBS are suggested. Remote users connected via global networks are considered too.

  6. Knowledge Representation and Inference for Analysis and Design of Database and Tabular Rule-Based Systems

    Directory of Open Access Journals (Sweden)

    Antoni Ligeza

    2001-01-01

    Rule-based systems constitute a powerful tool for the specification of knowledge in the design and implementation of knowledge-based systems. They also provide a universal programming paradigm for domains such as intelligent control, decision support, situation classification and operational knowledge encoding. In order to assure safe and reliable performance, such systems should satisfy certain formal requirements, including completeness and consistency. This paper addresses the issue of analysis and verification of selected properties of a class of such systems in a systematic way. A uniform, tabular scheme of single-level rule-based systems is considered. Such systems can be applied as a generalized form of databases for the specification of data patterns (unconditional knowledge), or can be used for defining attributive decision tables (conditional knowledge in the form of rules). They can also serve as lower-level components of hierarchical multi-level control and decision support knowledge-based systems. An algebraic knowledge representation paradigm using an extended tabular representation, similar to relational database tables, is presented, and algebraic bases for system analysis, verification and design support are outlined.

  7. Application of CAPEC Lipid Property Databases in the Synthesis and Design of Biorefinery Networks

    DEFF Research Database (Denmark)

    Bertran, Maria-Ona; Cunico, Larissa; Gani, Rafiqul

    Petroleum is currently the primary raw material for the production of fuels and chemicals. Consequently, our society is highly dependent on fossil non-renewable resources. However, renewable raw materials are recently receiving increasing interest for the production of chemicals and fuels, so a n... of biorefinery networks. The objective of this work is to show the application of databases of physical and thermodynamic properties of lipid components to the synthesis and design of biorefinery networks... The wide variety and complex nature of components in biorefineries poses a challenge with respect to the synthesis and design of these types of processes. Whereas physical and thermodynamic property data or models for petroleum-based processes are widely available, most data and models for biobased...

  8. Professional Microsoft SQL Server 2012 Reporting Services

    CERN Document Server

    Turley, Paul; Silva, Thiago; Withee, Ken; Paisley, Grant

    2012-01-01

    A must-have guide for the latest updates to the new release of Reporting Services SQL Server Reporting Services allows you to create reports and business intelligence (BI) solutions. With this updated resource, a team of experts shows you how Reporting Services makes reporting faster, easier and more powerful than ever in web, desktop, and portal solutions. New coverage discusses the new reporting tool called Crescent, BI semantic model's impact on report design and creation, semantic model design, and more. You'll explore the major enhancements to Report Builder and benefit from best practice

  9. Multimedia medical data archive and retrieval server on the Internet

    Science.gov (United States)

    Komo, Darmadi; Levine, Betty A.; Freedman, Matthew T.; Mun, Seong K.; Tang, Y. K.; Chiang, Ted T.

    1997-05-01

    The Multimedia Medical Data Archive and Retrieval Server has been installed at the imaging science and information systems (ISIS) center in Georgetown University Medical Center to provide medical data archive and retrieval support for medical researchers. The medical data includes text, images, sound, and video. All medical data is keyword indexed using a database management system and placed temporarily in a staging area and then transferred to a StorageTek one terabyte tape library system with a robotic arm for permanent archive. There are two methods of interaction with the system. The first method is to use a web browser with HTML functions to perform insert, query, update, and retrieve operations. These generate dynamic SQL calls to the database and produce StorageTek API calls to the tape library. The HTML functions consist of a database, StorageTek interface, HTTP server, common gateway interface, and Java programs. The second method is to issue a DICOM store command, which is translated by the system's DICOM server to SQL calls and then produce StorageTek API calls to the tape library. The system performs as both an Internet and a DICOM server using standard protocols such as HTTP, HTML, Java, and DICOM. Users with proper authentication can log on to the server from anywhere on the Internet using a standard web browser resulting in a user-friendly, open environment, and platform independent solution for archiving multimedia medical data. It represents a complex integration of different components including a robotic tape storage system, database, user-interface, WWW protocols, and TCP/IP networking. The user will only deal with the WWW and DICOM server components of the system, the database and robotic tape library system are transparent and the user will not know that the medical data is stored on magnetic tapes. The server provides the researchers a cost-effective tool for archiving and retrieving medical data across a TCP/IP network environment. It will

  10. Database architectures for Space Telescope Science Institute

    Science.gov (United States)

    Lubow, Stephen

    1993-08-01

    At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).

  11. Automated grading of homework assignments and tests in introductory and intermediate statistics courses using active server pages.

    Science.gov (United States)

    Stockburger, D W

    1999-05-01

    Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.
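
    The original system used server-side script and a database on NT Server 4.0; as a minimal sketch of the same flow in Python, the code below seeds an individualized problem from a student ID, grades the submitted answer, and records the grade in a SQLite grade book. The table, column names, and the toy "sample mean" problem are invented for the example.

        import random
        import sqlite3

        def make_assignment(student_id: str):
            """Deterministically generate one statistics problem per student."""
            rng = random.Random(student_id)          # same ID -> same problem
            data = [rng.randint(1, 20) for _ in range(5)]
            return data, sum(data) / len(data)       # question: the sample mean

        def grade_and_record(db, student_id, submitted, tol=0.01):
            data, answer = make_assignment(student_id)
            score = 1.0 if abs(submitted - answer) <= tol else 0.0
            db.execute("INSERT INTO gradebook (student, score) VALUES (?, ?)",
                       (student_id, score))
            db.commit()
            return score

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE gradebook (student TEXT, score REAL)")
        data, _ = make_assignment("s123")
        print("data for s123:", data)
        print("score:", grade_and_record(db, "s123", submitted=sum(data) / 5))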

  12. Microsoft SQL Server 2012 administration real-world skills for MCSA certification and beyond (exams 70-461, 70-462, and 70-463)

    CERN Document Server

    Carpenter, Tom

    2013-01-01

    Implement, maintain, and repair SQL Server 2012 databases As the most significant update since 2008, Microsoft SQL Server 2012 boasts updates and new features that are critical to understand. Whether you manage and administer SQL Server 2012 or are planning to get your MCSA: SQL Server 2012 certification, this book is the perfect supplement to your learning and preparation. From understanding SQL Server's roles to implementing business intelligence and reporting, this practical book explores tasks and scenarios that a working SQL Server DBA faces regularly and shows you step by step...

  13. Gene composer: database software for protein construct design, codon engineering, and gene synthesis.

    Science.gov (United States)

    Lorimer, Don; Raymond, Amy; Walchli, John; Mixon, Mark; Barrow, Adrienne; Wallace, Ellen; Grice, Rena; Burgin, Alex; Stewart, Lance

    2009-04-21

    To improve efficiency in high throughput protein structure determination, we have developed a database software package, Gene Composer, which facilitates the information-rich design of protein constructs and their codon engineered synthetic gene sequences. With its modular workflow design and numerous graphical user interfaces, Gene Composer enables researchers to perform all common bio-informatics steps used in modern structure guided protein engineering and synthetic gene engineering. An interactive Alignment Viewer allows the researcher to simultaneously visualize sequence conservation in the context of known protein secondary structure, ligand contacts, water contacts, crystal contacts, B-factors, solvent accessible area, residue property type and several other useful property views. The Construct Design Module enables the facile design of novel protein constructs with altered N- and C-termini, internal insertions or deletions, point mutations, and desired affinity tags. The modifications can be combined and permuted into multiple protein constructs, and then virtually cloned in silico into defined expression vectors. The Gene Design Module uses a protein-to-gene algorithm that automates the back-translation of a protein amino acid sequence into a codon engineered nucleic acid gene sequence according to a selected codon usage table with minimal codon usage threshold, defined G:C% content, and desired sequence features achieved through synonymous codon selection that is optimized for the intended expression system. The gene-to-oligo algorithm of the Gene Design Module plans out all of the required overlapping oligonucleotides and mutagenic primers needed to synthesize the desired gene constructs by PCR, and for physically cloning them into selected vectors by the most popular subcloning strategies. We present a complete description of Gene Composer functionality, and an efficient PCR-based synthetic gene assembly procedure with mis-match specific endonuclease
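
    A hedged sketch of the back-translation step described above: for each amino acid, choose a synonymous codon drawn from a usage table after discarding codons below a minimal usage threshold. The tiny usage table is illustrative rather than any organism's real one, and Gene Composer's actual algorithm additionally balances G:C content and other sequence features.

        import random

        # Illustrative codon usage fractions for three amino acids (not a real table).
        USAGE = {
            "M": {"ATG": 1.00},
            "K": {"AAA": 0.74, "AAG": 0.26},
            "L": {"CTG": 0.47, "TTA": 0.14, "TTG": 0.13,
                  "CTT": 0.12, "CTC": 0.10, "CTA": 0.04},
        }

        def back_translate(protein, usage=USAGE, threshold=0.10, seed=0):
            """Pick one codon per residue, weighted by usage, above a threshold."""
            rng = random.Random(seed)
            gene = []
            for aa in protein:
                table = {c: f for c, f in usage[aa].items() if f >= threshold}
                codons, weights = zip(*table.items())
                gene.append(rng.choices(codons, weights=weights)[0])
            return "".join(gene)

        print(back_translate("MKL"))   # e.g. ATGAAACTG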

  14. Gene Composer: database software for protein construct design, codon engineering, and gene synthesis

    Directory of Open Access Journals (Sweden)

    Mixon Mark

    2009-04-01

    Background: To improve efficiency in high throughput protein structure determination, we have developed a database software package, Gene Composer, which facilitates the information-rich design of protein constructs and their codon engineered synthetic gene sequences. With its modular workflow design and numerous graphical user interfaces, Gene Composer enables researchers to perform all common bio-informatics steps used in modern structure guided protein engineering and synthetic gene engineering. Results: An interactive Alignment Viewer allows the researcher to simultaneously visualize sequence conservation in the context of known protein secondary structure, ligand contacts, water contacts, crystal contacts, B-factors, solvent accessible area, residue property type and several other useful property views. The Construct Design Module enables the facile design of novel protein constructs with altered N- and C-termini, internal insertions or deletions, point mutations, and desired affinity tags. The modifications can be combined and permuted into multiple protein constructs, and then virtually cloned in silico into defined expression vectors. The Gene Design Module uses a protein-to-gene algorithm that automates the back-translation of a protein amino acid sequence into a codon engineered nucleic acid gene sequence according to a selected codon usage table with minimal codon usage threshold, defined G:C% content, and desired sequence features achieved through synonymous codon selection that is optimized for the intended expression system. The gene-to-oligo algorithm of the Gene Design Module plans out all of the required overlapping oligonucleotides and mutagenic primers needed to synthesize the desired gene constructs by PCR, and for physically cloning them into selected vectors by the most popular subcloning strategies. Conclusion: We present a complete description of Gene Composer functionality, and an efficient PCR-based synthetic gene...

  15. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the others. The test results show that the power consumption variability caused by the key components as a...
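
    Using the definition quoted above (efficiency = average compute rate divided by average power consumption rate), a tiny worked example follows; the benchmark numbers are invented for illustration and are not the report's measurements.

        # efficiency = average compute rate / average power consumption rate
        def efficiency(operations: float, seconds: float, avg_watts: float) -> float:
            """Operations per joule: (ops/s) / (J/s)."""
            return (operations / seconds) / avg_watts

        # Invented benchmark results for two hypothetical servers.
        server_a = efficiency(operations=9.0e12, seconds=600, avg_watts=350)
        server_b = efficiency(operations=8.1e12, seconds=600, avg_watts=295)
        print(f"A: {server_a:,.0f} ops/J   B: {server_b:,.0f} ops/J")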

  16. A new Volcanic managEment Risk Database desIgn (VERDI): Application to El Hierro Island (Canary Islands)

    Science.gov (United States)

    Bartolini, S.; Becerril, L.; Martí, J.

    2014-11-01

    One of the most important issues in modern volcanology is the assessment of volcanic risk, which will depend - among other factors - on both the quantity and quality of the available data and an optimum storage mechanism. This will require the design of purpose-built databases that take into account data format and availability and afford easy data storage and sharing, and will provide for a more complete risk assessment that combines different analyses but avoids any duplication of information. Data contained in any such database should facilitate spatial and temporal analysis that will (1) produce probabilistic hazard models for future vent opening, (2) simulate volcanic hazards and (3) assess their socio-economic impact. We describe the design of a new spatial database structure, VERDI (Volcanic managEment Risk Database desIgn), which allows different types of data, including geological, volcanological, meteorological, monitoring and socio-economic information, to be manipulated, organized and managed. The root of the question is to ensure that VERDI will serve as a tool for connecting different kinds of data sources, GIS platforms and modeling applications. We present an overview of the database design, its components and the attributes that play an important role in the database model. The potential of the VERDI structure and the possibilities it offers in regard to data organization are here shown through its application on El Hierro (Canary Islands). The VERDI database will provide scientists and decision makers with a useful tool that will assist to conduct volcanic risk assessment and management.

  17. Professional Team Foundation Server 2012

    CERN Document Server

    Blankenship, Ed; Holliday, Grant; Keller, Brian

    2012-01-01

    A comprehensive guide to using Microsoft Team Foundation Server 2012 Team Foundation Server has become the leading Microsoft productivity tool for software management, and this book covers what developers need to know to use it effectively. Fully revised for the new features of TFS 2012, it provides developers and software project managers with step-by-step instructions and even assists those who are studying for the TFS 2012 certification exam. You'll find a broad overview of TFS, thorough coverage of core functions, a look at extensibility options, and more, written by Microsoft insiders...

  18. GeoServer beginner's guide

    CERN Document Server

    Youngblood, Brian

    2013-01-01

    Step-by-step instructions are included, and the needs of a beginner are fully satisfied by the book. The book consists of plenty of examples with accompanying screenshots and code for an easy learning curve. You are a web developer with knowledge of server-side scripting, and have experience with installing applications on the server. You want more than Google Maps, offering dynamically built maps on your site with your latest geospatial data stored in MySQL, PostGIS, MsSQL or Oracle. If this is the case, this book is meant for you.

  19. Professional Team Foundation Server 2010

    CERN Document Server

    Blankenship, Ed; Holliday, Grant; Keller, Brian

    2011-01-01

    Authoritative guide to TFS 2010 from a dream team of Microsoft insiders and MVPs! Microsoft Visual Studio Team Foundation Server (TFS) has evolved until it is now an essential tool for Microsoft's Application Lifecycle Management suite of productivity tools, enabling collaboration within and among software development teams. By 2011, TFS will replace Microsoft's leading source control system, Visual SourceSafe, resulting in an even greater demand for information about it. Professional Team Foundation Server 2010, written by an accomplished team of Microsoft insiders and Microsoft MVPs, provides

  20. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  1. Design and implementation of an XML based object-oriented detector description database for CMS

    International Nuclear Information System (INIS)

    Liendl, M.

    2003-04-01

    This thesis deals with the development of a detector description database (DDD) for the compact muon solenoid (CMS) experiment at the large hadron collider (LHC) located at the European organization for nuclear research (CERN). DDD is a fundamental part of the CMS offline software with its main applications, simulation and reconstruction. Both are in need of different models of the detector in order to efficiently solve their specific tasks. In the thesis the requirements to a detector description database are analyzed and the chosen solution is described in detail. It comprises the following components: an XML based detector description language, a runtime system that implements an object-oriented transient representation of the detector, and an application programming interface to be used by client applications. One of the main aspects of the development is the design of the DDD components. The starting point is a domain model capturing concisely the characteristics of the problem domain. The domain model is transformed into several implementation models according to the guidelines of the model driven architecture (MDA). Implementation models and appropriate refinements thereof are foundation for adequate implementations. Using the MDA approach, a fully functional prototype was realized in C++ and XML. The prototype was successfully tested through seamless integration into both the simulation and the reconstruction framework of CMS. (author)

  2. Design and analysis of stochastic DSS query optimizers in a distributed database system

    Directory of Open Access Journals (Sweden)

    Manik Sharma

    2016-07-01

    Query optimization is a stimulating task in any database system. A number of heuristics have been applied in recent times, proposing new algorithms for substantially improving the performance of a query; the hunt for a better solution still continues. The continual developments in the field of Decision Support System (DSS) databases are producing data at an exceptional rate. The massive volume of DSS data is consequential only when it can be accessed and analyzed by researchers. Here, an innovative stochastic framework for DSS query optimization is proposed to further improve on the design of existing genetic query optimization approaches. The results of the Entropy Based Restricted Stochastic Query Optimizer (ERSQO) are compared with the results of the Exhaustive Enumeration Query Optimizer (EAQO), the Simple Genetic Query Optimizer (SGQO), the Novel Genetic Query Optimizer (NGQO) and the Restricted Stochastic Query Optimizer (RSQO). In terms of Total Costs, EAQO outperforms SGQO, NGQO, RSQO and ERSQO; however, the stochastic approaches dominate in terms of runtime. The Total Costs produced by ERSQO are better than those of SGQO, NGQO and RSQO by 12%, 8% and 5%, respectively. Moreover, the effect of replicating data on the Total Costs of DSS queries is also examined. In addition, statistical analysis revealed a 2-tailed significant correlation between the number of join operations and the Total Costs of a distributed DSS query. Finally, with regard to the consistency of the stochastic query optimizers, the results of SGQO, NGQO, RSQO and ERSQO are 96.2%, 97.2%, 97.4% and 97.8% consistent, respectively.
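
    The ERSQO algorithm itself is not given in the abstract. As a generic sketch of the stochastic approach it builds on, the code below runs a simple genetic search (selection plus swap mutation) over join orders, with a made-up cost function standing in for the paper's distributed cost model; the relations, cost model, and GA settings are all invented.

        import random

        RELATIONS = list("ABCDEF")              # relations to join (illustrative)

        def cost(order):
            """Stand-in cost model: penalize 'expensive' adjacent join pairs."""
            return sum(abs(ord(a) - ord(b)) for a, b in zip(order, order[1:]))

        def mutate(order, rng):
            i, j = rng.sample(range(len(order)), 2)   # swap two relations
            order = list(order)
            order[i], order[j] = order[j], order[i]
            return order

        def genetic_join_order(pop_size=30, generations=50, seed=0):
            rng = random.Random(seed)
            pop = [rng.sample(RELATIONS, len(RELATIONS)) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=cost)
                survivors = pop[: pop_size // 2]       # selection
                pop = survivors + [mutate(rng.choice(survivors), rng)
                                   for _ in range(pop_size - len(survivors))]
            return min(pop, key=cost)

        best = genetic_join_order()
        print("best order:", "".join(best), "cost:", cost(best))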

  3. TogoDoc server/client system: smart recommendation and efficient management of life science literature.

    Science.gov (United States)

    Iwasaki, Wataru; Yamamoto, Yasunori; Takagi, Toshihisa

    2010-12-13

    In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the "tsunami" of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom.
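
    TogoDoc's recommendation engine is not described at implementation level in the abstract. As a hedged illustration of content-based recommendation over a personal library, the sketch below ranks new papers by TF-IDF cosine similarity to the user's existing collection using scikit-learn; the abstracts are toy stand-ins.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Abstracts already in the user's library (toy stand-ins).
        library = [
            "protein structure prediction with deep learning",
            "rna secondary structure and thermodynamic models",
        ]
        # Newly published candidate abstracts to rank.
        candidates = [
            "thermodynamics of rna folding pathways",
            "a survey of relational database indexing",
        ]

        vec = TfidfVectorizer(stop_words="english")
        matrix = vec.fit_transform(library + candidates)
        lib_m, cand_m = matrix[: len(library)], matrix[len(library):]

        # Score each candidate by its best similarity to any library paper.
        scores = cosine_similarity(cand_m, lib_m).max(axis=1)
        for score, text in sorted(zip(scores, candidates), reverse=True):
            print(f"{score:.2f}  {text}")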

  4. TogoDoc server/client system: smart recommendation and efficient management of life science literature.

    Directory of Open Access Journals (Sweden)

    Wataru Iwasaki

    In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the "tsunami" of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom.

  5. jSPyDB, an open source database-independent tool for data management

    Science.gov (United States)

    Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo

    2011-12-01

    Nowadays, the number of commercial tools available for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open-source, they provide interfaces only for a specific kind of database, and they are platform-dependent and very CPU- and memory-consuming. jSPyDB is a free web-based tool written in Python and Javascript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a back-end server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users are allowed to create their customized views for better data visualization. In this way, we optimize the performance of database servers by avoiding short connections and concurrent sessions. In addition, security is enforced since we do not provide users the possibility to directly execute any SQL statement.
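
    jSPyDB's server layer is not reproduced here. As a small sketch of the kind of database-independent access that SQLAlchemy enables, the code below connects to a backend from a single URL, lists its tables, and serializes a query result as JSON for a browser-side client; the table and data are invented, and an in-memory SQLite database keeps the sketch self-contained.

        import json
        from sqlalchemy import create_engine, inspect, text

        # Any SQLAlchemy URL works here (postgresql://..., oracle://..., etc.).
        engine = create_engine("sqlite:///:memory:")

        with engine.begin() as conn:
            conn.execute(text("CREATE TABLE runs (id INTEGER, status TEXT)"))
            conn.execute(text("INSERT INTO runs VALUES (1, 'good'), (2, 'bad')"))

        print("tables:", inspect(engine).get_table_names())

        with engine.connect() as conn:
            rows = conn.execute(text("SELECT id, status FROM runs")).mappings().all()
        # Serialize for the client side, much as a JSON-speaking server layer would.
        print(json.dumps([dict(r) for r in rows]))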

  6. jSPyDB, an open source database-independent tool for data management

    International Nuclear Information System (INIS)

    Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo

    2011-01-01

    Nowadays, the number of commercial tools available for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open-source, they provide interfaces only for a specific kind of database, and they are platform-dependent and very CPU- and memory-consuming. jSPyDB is a free web-based tool written in Python and Javascript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a back-end server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users are allowed to create their customized views for better data visualization. In this way, we optimize the performance of database servers by avoiding short connections and concurrent sessions. In addition, security is enforced since we do not provide users the possibility to directly execute any SQL statement.

  7. A decade of Web Server updates at the Bioinformatics Links Directory: 2003-2012.

    Science.gov (United States)

    Brazas, Michelle D; Yim, David; Yeung, Winston; Ouellette, B F Francis

    2012-07-01

    The 2012 Bioinformatics Links Directory update marks the 10th special Web Server issue from Nucleic Acids Research. Beginning with content from their 2003 publication, the Bioinformatics Links Directory in collaboration with Nucleic Acids Research has compiled and published a comprehensive list of freely accessible, online tools, databases and resource materials for the bioinformatics and life science research communities. The past decade has exhibited significant growth and change in the types of tools, databases and resources being put forth, reflecting both technology changes and the nature of research over that time. With the addition of 90 web server tools and 12 updates from the July 2012 Web Server issue of Nucleic Acids Research, the Bioinformatics Links Directory at http://bioinformatics.ca/links_directory/ now contains an impressive 134 resources, 455 databases and 1205 web server tools, mirroring the continued activity and efforts of our field.

  8. Design and development of a geo-referenced database to radionuclides in food

    Science.gov (United States)

    Nascimento, L. M. E.; Ferreira, A. C. M.; Gonzalez, S. A.

    2018-03-01

    The primary purpose of the range of activities concerning the info management of the environmental assessment is to provide to the scientific community an improved access to environmental data, as well as to support the decision making loop, in case of contamination events due either to accidental or intentional causes. In recent years, geotechnologies became a key reference in environmental research and monitoring, since they deliver efficient data retrieval and subsequent processing about natural resources. This study aimed at the development of a georeferenced database (SIGLARA – SIstema Georeferenciado Latino Americano de Radionuclídeos em Alimentos), designed for the storage of data on radioactivity in food, available in three languages (Spanish, Portuguese and English), employing free software.

  9. The method of abstraction in the design of databases and the interoperability

    Science.gov (United States)

    Yakovlev, Nikolay

    2018-03-01

    When designing the database structure oriented to the contents of indicators presented in the documents and communications subject area. First, the method of abstraction is applied by expansion of the indices of new, artificially constructed abstract concepts. The use of abstract concepts allows to avoid registration of relations many-to-many. For this reason, when built using abstract concepts, demonstrate greater stability in the processes. The example abstract concepts to address structure - a unique house number. Second, the method of abstraction can be used in the transformation of concepts by omitting some attributes that are unnecessary for solving certain classes of problems. Data processing associated with the amended concepts is more simple without losing the possibility of solving the considered classes of problems. For example, the concept "street" loses the binding to the land. The content of the modified concept of "street" are only the relations of the houses to the declared name. For most accounting tasks and ensure communication is enough.

  10. Design and development of a geo-referenced database to radionuclides in food

    Energy Technology Data Exchange (ETDEWEB)

    Nascimento, Lucia Maria Evangelista do; Ferreira, Ana Cristina de Melo; Gonzalez, Sergio de Albuquerque, E-mail: anacris@ird.gov.br [Instituto de Radioproteção e Dosimetria (RD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    The primary purpose of the range of activities concerning the info management of the environmental assessment is to provide to the scientific community an improved access to environmental data, as well as to support the decision making loop, in case of contamination events due either to accidental or intentional causes. In recent years, geotechnologies became a key reference in environmental research and monitoring, since they deliver efficient data retrieval and subsequent processing about natural resources. This study aimed at the development of a georeferenced database (SIGLARA - Sistema Georeferenciado Latino Americano de Radionuclídeos em Alimentos), designed for the storage of data on radioactivity in food, available in three languages (Spanish, Portuguese and English), employing free software. (author)

  11. Design and development of a geo-referenced database to radionuclides in food

    International Nuclear Information System (INIS)

    Nascimento, Lucia Maria Evangelista do; Ferreira, Ana Cristina de Melo; Gonzalez, Sergio de Albuquerque

    2017-01-01

    The primary purpose of the range of activities concerning the info management of the environmental assessment is to provide to the scientific community an improved access to environmental data, as well as to support the decision making loop, in case of contamination events due either to accidental or intentional causes. In recent years, geotechnologies became a key reference in environmental research and monitoring, since they deliver efficient data retrieval and subsequent processing about natural resources. This study aimed at the development of a georeferenced database (SIGLARA - Sistema Georeferenciado Latino Americano de Radionuclídeos em Alimentos), designed for the storage of data on radioactivity in food, available in three languages (Spanish, Portuguese and English), employing free software. (author)

  12. The ATLAS Wide-Range Database & Application Monitoring

    CERN Document Server

    Vasileva, Petya Tsvetanova; The ATLAS collaboration

    2018-01-01

    In HEP experiments at the LHC, database applications often become complex, reflecting the ever more demanding requirements of the researchers. The ATLAS experiment has several Oracle DB clusters with over 216 database schemas, each with its own set of database objects. To effectively monitor them, we designed a modern and portable application with exceptionally good characteristics. Some of them include: a concise view of the most important DB metrics; top SQL statements based on CPU, executions, block reads, etc.; volume growth plots per schema and DB object type; a database jobs section with signaling for problematic ones; and in-depth analysis in case of contention on data or processes. This contribution also describes the technical aspects of the implementation. The project can be separated into three independent layers. The first layer consists of highly-optimized database objects hiding all complicated calculations. The second layer represents a server providing REST access to the underlying database backend. The th...

  13. Learning SQL Server Reporting Services 2012

    CERN Document Server

    Krishnaswamy, Jayaram

    2013-01-01

    The book is packed with clear instructions and plenty of screenshots, providing all the support and guidance you will need as you begin to generate reports with SQL Server 2012 Reporting Services.This book is for those who are new to SQL Server Reporting Services 2012 and aspiring to create and deploy cutting edge reports. This book is for report developers, report authors, ad-hoc report authors and model developers, and Report Server and SharePoint Server Integrated Report Server administrators. Minimal knowledge of SQL Server is assumed and SharePoint experience would be helpful.

  14. Implementing Citrix XenServer Quickstarter

    CERN Document Server

    Ahmed, Gohar

    2013-01-01

    Implementing Citrix XenServer Quick Starter is a practical, hands-on guide that will help you get started with the Citrix XenServer Virtualization technology with easy-to-follow instructions. Implementing Citrix XenServer Quick Starter is for system administrators who have little to no information on virtualization, and specifically Citrix XenServer Virtualization. If you're managing a lot of physical servers and are tired of installing, deploying, updating, and managing physical machines on a daily basis over and over again, then you should probably explore your option of XenServer Virtualization...

  15. Open client/server computing and middleware

    CERN Document Server

    Simon, Alan R

    2014-01-01

    Open Client/Server Computing and Middleware provides a tutorial-oriented overview of open client/server development environments and how client/server computing is being done.This book analyzes an in-depth set of case studies about two different open client/server development environments-Microsoft Windows and UNIX, describing the architectures, various product components, and how these environments interrelate. Topics include the open systems and client/server computing, next-generation client/server architectures, principles of middleware, and overview of ProtoGen+. The ViewPaint environment

  16. 2MASS Catalog Server Kit Version 2.1

    Science.gov (United States)

    Yamauchi, C.

    2013-10-01

    The 2MASS Catalog Server Kit is open source software for easily constructing a high performance search server for important astronomical catalogs. This software utilizes the open source RDBMS PostgreSQL; therefore, any user can set up the database on a local computer by following the step-by-step installation guide. The kit provides highly optimized stored functions for positional searches, similar to those of SDSS SkyServer. Together with these, the powerful SQL environment of PostgreSQL will meet various users' demands. We released 2MASS Catalog Server Kit version 2.1 in 2012 May, which supports the latest WISE All-Sky catalog (563,921,584 rows) and 9 major all-sky catalogs. Local databases are often indispensable for observatories with unstable or narrow-band networks or severe use, such as retrieving large numbers of records within a small period of time. This software is best suited for such purposes, and the increased number of supported catalogs and other improvements in version 2.1 cover a wider range of applications, including advanced calibration systems, scientific studies using complicated SQL queries, etc. Official page: http://www.ir.isas.jaxa.jp/~cyamauch/2masskit/
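
    The kit's actual stored-function names are documented on its official page and are not repeated here. The sketch below only illustrates the usage pattern such SkyServer-style positional-search functions are designed for: calling a cone-search stored function from Python with psycopg2. The connection parameters and fGetNearbyObj() are hypothetical placeholders (cf. SDSS SkyServer's fGetNearbyObjEq).

        import psycopg2

        # Placeholder connection parameters for a locally installed catalog DB.
        conn = psycopg2.connect(dbname="catalogs", user="reader", host="localhost")
        cur = conn.cursor()

        # fGetNearbyObj() is a HYPOTHETICAL stored function standing in for the
        # kit's real positional-search functions.
        ra, dec, radius_deg = 266.417, -29.008, 0.05
        cur.execute("SELECT * FROM fGetNearbyObj(%s, %s, %s)", (ra, dec, radius_deg))
        for row in cur.fetchmany(10):
            print(row)

        cur.close()
        conn.close()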

  17. Building server capabilities in China

    DEFF Research Database (Denmark)

    Adeyemi, Oluseyi; Slepniov, Dmitrij; Wæhrens, Brian Vejrum

    2012-01-01

    The purpose of this paper is to further our understanding of multinational companies building server capabilities in China. The paper is based on the cases of two western companies with operations in China. The findings highlight a number of common patterns in the 1) managerial challenges related...

  18. Client-server password recovery

    NARCIS (Netherlands)

    Chmielewski, Ł.; Hoepman, J.H.; Rossum, P. van

    2009-01-01

    Human memory is not perfect - people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the

  19. Client-Server Password Recovery

    NARCIS (Netherlands)

    Chmielewski, L.; Hoepman, J.H.; Rossum, P. van

    2009-01-01

    Human memory is not perfect – people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the

  20. Team Foundation Server 2013 customization

    CERN Document Server

    Beeming, Gordon

    2014-01-01

    This book utilizes a tutorial-based approach, focused on the practical customization of key features of Team Foundation Server for collaborative enterprise software projects. This practical guide is intended for those who want to extend TFS. This book is for intermediate users who have an understanding of TFS, and basic coding skills will be required for the more complex customizations.

  1. A relational database for physical data from TJ-II discharges

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A.B.; Vega, J.

    2002-01-01

    A relational database (RDB) has been developed for classifying TJ-II experimental data according to physical criteria. Two objectives have been achieved: the design and implementation of the database, and software tools for data access that depend on a single software driver. TJ-II data were arranged in several tables with a flexible design, speedy performance, efficient search capacity and adaptability to present and future requirements. The software allows access to the TJ-II RDB from a variety of computer platforms (ALPHA AXP/True64 UNIX, CRAY/UNICOS, Intel Linux, Sparc/Solaris and Intel/Windows 95/98/NT) and programming languages (FORTRAN and C/C++). The database resides in a Windows NT Server computer and is managed by Microsoft SQL Server. The access software is based on open network computing remote procedure call and follows the client/server model. A server program running in the Windows NT computer controls data access. Operations on the database (through a local ODBC connection) are performed according to predefined permission protocols. A client library providing a set of basic functions for data integration and retrieval has been built in both static and dynamic link versions. The dynamic version is essential for accessing RDB data from 4GL environments (IDL and PV-WAVE among others)
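
    The original client library is C/FORTRAN over ONC RPC, but because the RDB is an ordinary Microsoft SQL Server database reached through ODBC, an equivalent query can be sketched in Python with pyodbc. The server, database, table and column names below are hypothetical.

        # Not the TJ-II client library itself; a minimal modern equivalent that
        # queries the Microsoft SQL Server RDB directly through ODBC.
        # Server, database, table, and column names are placeholders.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=tj2-rdb;DATABASE=tj2;UID=reader;PWD=secret"
        )
        cur = conn.cursor()
        # Fetch shot numbers matching a physical criterion, e.g. an ECRH power range.
        cur.execute("SELECT shot FROM discharges WHERE ecrh_power_kw BETWEEN ? AND ?", 200, 400)
        for (shot,) in cur.fetchall():
            print(shot)
        conn.close()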

  2. International Nuclear Safety Center (INSC) database

    International Nuclear Information System (INIS)

    Sofu, T.; Ley, H.; Turski, R.B.

    1997-01-01

    As an integral part of DOE's International Nuclear Safety Center (INSC) at Argonne National Laboratory, the INSC Database has been established to provide an interactively accessible information resource for the world's nuclear facilities and to promote free and open exchange of nuclear safety information among nations. The INSC Database is a comprehensive resource database aimed at a scope and level of detail suitable for safety analysis and risk evaluation for the world's nuclear power plants and facilities. It also provides an electronic forum for international collaborative safety research for the Department of Energy and its international partners. The database is intended to provide plant design information, material properties, computational tools, and results of safety analysis. Initial emphasis in data gathering is given to Soviet-designed reactors in Russia, the former Soviet Union, and Eastern Europe. The implementation is performed under the Oracle database management system, and the World Wide Web is used to serve as the access path for remote users. An interface between the Oracle database and the Web server is established through a custom designed Web-Oracle gateway which is used mainly to perform queries on the stored data in the database tables

  3. [Design and implementation of medical instrument standard information retrieval system based on ASP.NET].

    Science.gov (United States)

    Yu, Kaijun

    2010-07-01

    This paper analyses the design goals of a medical instrumentation standard information retrieval system. Based on the B/S (browser/server) structure, we established a medical instrumentation standard retrieval system in the .NET environment, using the ASP.NET C# programming language, the IIS web server and a SQL Server 2000 database. The paper also introduces the system structure, the retrieval system modules, the system development environment and the detailed design of the system.

  4. GeoServer: il server geospaziale Open Source novità della nuova versione 2.3.0

    Directory of Open Access Journals (Sweden)

    Simone Giannecchini

    2013-04-01

    Full Text Available GeoServer is an Open Source geospatial server developed with Java Enterprise technology for managing, sharing and editing geospatial data according to the OGC and ISO Technical Committee 211 standards. It provides the basic functionality for creating spatial data infrastructures (SDI) and is designed for interoperability, publishing data from any major spatial data source using open standards: it is the reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) and Web Coverage Service (WCS) standards, as well as a high-performance certified compliant Web Map Service (WMS). GeoServer forms a core component of the Geospatial Web.

  5. DATABASE REPLICATION IN HETEROGENOUS PLATFORM

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2014-01-01

    The application of diverse database technologies in enterprises today is increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology that can replicate databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the source data are stored in an MSSQL database server running on Windows. The data will be replicated to MySQL...
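
    A minimal sketch of the kind of heterogeneous copy the research evaluates, reading from MSSQL via ODBC and writing to MySQL, is shown below. Production replication adds change capture, conflict handling and scheduling; all connection details and table names here are illustrative.

        # One-way, table-level copy from MSSQL to MySQL; a sketch only.
        import pyodbc
        import pymysql

        src = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                             "SERVER=mssql-host;DATABASE=sales;UID=repl;PWD=secret")
        dst = pymysql.connect(host="mysql-host", user="repl",
                              password="secret", database="sales")

        with dst.cursor() as out:
            # REPLACE INTO makes the copy idempotent on the MySQL side.
            for row in src.cursor().execute("SELECT id, amount FROM orders"):
                out.execute("REPLACE INTO orders (id, amount) VALUES (%s, %s)",
                            (row.id, row.amount))
        dst.commit()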

  6. Expitope: a web server for epitope expression.

    Science.gov (United States)

    Haase, Kerstin; Raffegerst, Silke; Schendel, Dolores J; Frishman, Dmitrij

    2015-06-01

    Adoptive T cell therapies based on introduction of new T cell receptors (TCRs) into patient recipient T cells are a promising new treatment for various kinds of cancers. A major challenge, however, is the choice of target antigens. If an engineered TCR can cross-react with self-antigens in healthy tissue, the side-effects can be devastating. We present the first web server for assessing epitope sharing when designing new potential lead targets. We enable the users to find all known proteins containing their peptide of interest. The web server returns not only exact matches, but also approximate ones, allowing a number of mismatches of the user's choice. For the identified candidate proteins, the expression values in various healthy tissues, representing all vital human organs, are extracted from RNA sequencing (RNA-Seq) data, as well as from some cancer tissues as a control. All results are returned to the user sorted by a score, which is calculated using well-established methods and tools for immunological predictions. It depends on the probability that the epitope is created by proteasomal cleavage and on its affinities to the transporter associated with antigen processing and to the major histocompatibility complex class I alleles. With this framework, we hope to provide a helpful tool to exclude potential cross-reactivity at an early stage of TCR selection for use in the design of adoptive T cell immunotherapy. The Expitope web server can be accessed via http://webclu.bio.wzw.tum.de/expitope.

  7. The effect of wild card designations and rare alleles in forensic DNA database searches

    DEFF Research Database (Denmark)

    Tvedebrink, Torben; Bright, Jo-Anne; Buckleton, John S

    2015-01-01

    Forensic DNA databases are powerful tools used for the identification of persons of interest in criminal investigations. Typically, they consist of two parts: (1) a database containing DNA profiles of known individuals and (2) a database of DNA profiles associated with crime scenes. The risk of adventitious or chance matches between crimes and innocent people increases as the number of profiles within a database grows and more data is shared between various forensic DNA databases, e.g. from different jurisdictions. The DNA profiles obtained from crime scenes are often partial because crime samples...

  8. The PEP-II/BaBar Project-Wide Database using World Wide Web and Oracle*Case

    International Nuclear Information System (INIS)

    Chan, A.; Crane, G.; MacGregor, I.; Meyer, S.

    1995-12-01

    The PEP-II/BaBar Project Database is a tool for monitoring the technical and documentation aspects of the accelerator and detector construction. It holds the PEP-II/BaBar design specifications, fabrication and installation data in one integrated system. Key pieces of the database include the machine parameter list, components fabrication and calibration data, survey and alignment data, property control, CAD drawings, publications and documentation. This central Oracle database on a UNIX server is built using Oracle*Case tools. Users at the collaborating laboratories mainly access the data using World Wide Web (WWW). The Project Database is being extended to link to legacy databases required for the operations phase

  9. A user's manual for the database management system of impact property

    International Nuclear Information System (INIS)

    Ryu, Woo Seok; Park, S. J.; Kong, W. S.; Jun, I.

    2003-06-01

    This manual is written for the management and maintenance of the impact database system for managing impact property test data. A database of the data produced from impact property tests increases the usefulness of the test results: basic data can easily be retrieved from the database when preparing a new experiment, and better results can be produced by comparison with previous data. To develop the database, the application must be analyzed and designed carefully; only then can the various requirements of customers be met with the best quality. The impact database system was developed as an internet application using JSP (Java Server Pages)

  10. Web Server Configuration for an Academic Intranet

    National Research Council Canada - National Science Library

    Baltzis, Stamatios

    2000-01-01

    .... One of the factors that boosted this ability was the evolution of web servers. Using web server technology, one can connect and exchange information with the most remote places all over the...

  11. Implementation of SRPT Scheduling in Web Servers

    National Research Council Canada - National Science Library

    Harchol-Balter, Mor

    2000-01-01

    .... Experiments use the Linux operating system and the Flash web server. All experiments are repeated under a range of server loads and under both trace-based workloads and those generated by a Web workload generator...
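
    The core of SRPT (shortest remaining processing time) scheduling is simply a queue ordered by remaining work. A minimal sketch, assuming the remaining work of a request is the number of bytes left to send, which is known up front for static files:

        # SRPT idea: always serve the request with the least remaining work.
        import heapq

        class SRPTQueue:
            def __init__(self):
                self._heap = []
                self._seq = 0  # tie-breaker so heapq never compares request objects

            def add(self, request, bytes_remaining):
                heapq.heappush(self._heap, (bytes_remaining, self._seq, request))
                self._seq += 1

            def serve_chunk(self, chunk=1024):
                # Send one chunk of the shortest-remaining request, then requeue it.
                if not self._heap:
                    return None
                remaining, _, request = heapq.heappop(self._heap)
                remaining -= chunk
                if remaining > 0:
                    self.add(request, remaining)
                return request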

  12. Locating Nearby Copies of Replicated Internet Servers

    National Research Council Canada - National Science Library

    Guyton, James D; Schwartz, Michael F

    1995-01-01

    In this paper we consider the problem of choosing among a collection of replicated servers focusing on the question of how to make choices that segregate client/server traffic according to network topology...

  13. SFCOMPO 2.0 - A relational database of spent fuel isotopic measurements, reactor operational histories, and design data

    Science.gov (United States)

    Michel-Sendis, Franco; Martinez-González, Jesus; Gauld, Ian

    2017-09-01

    SFCOMPO-2.0 is a database of experimental isotopic concentrations measured in destructive radiochemical analysis of spent nuclear fuel (SNF) samples. The database includes corresponding design description of the fuel rods and assemblies, relevant operating conditions and characteristics of the host reactors necessary for modelling and simulation. Aimed at establishing a thorough, reliable, and publicly available resource for code and data validation of safety-related applications, SFCOMPO-2.0 is developed and maintained by the OECD Nuclear Energy Agency (NEA). The SFCOMPO-2.0 database is a Java application which is downloadable from the NEA website.

  14. SFCOMPO 2.0 – A relational database of spent fuel isotopic measurements, reactor operational histories, and design data

    Directory of Open Access Journals (Sweden)

    Michel-Sendis Franco

    2017-01-01

    Full Text Available SFCOMPO-2.0 is a database of experimental isotopic concentrations measured in destructive radiochemical analysis of spent nuclear fuel (SNF) samples. The database includes the corresponding design description of the fuel rods and assemblies, relevant operating conditions and characteristics of the host reactors necessary for modelling and simulation. Aimed at establishing a thorough, reliable, and publicly available resource for code and data validation of safety-related applications, SFCOMPO-2.0 is developed and maintained by the OECD Nuclear Energy Agency (NEA). The SFCOMPO-2.0 database is a Java application which is downloadable from the NEA website.

  15. The design schemes of database and intelligent local controller in the SRRC control system

    International Nuclear Information System (INIS)

    Wang, C.J.; Chen, Jenny; Chen, J.S.; Jan, G.J.

    1994-01-01

    The control system of the SRRC has been used to facilitate commissioning since the beginning, and it provides operators with an easy-to-use environment. Hence, we discuss the design schemes and the relationships between the user interface, the database and the ILC (Intelligent Local Controller) levels. The whole control system in SRRC is a two-level design connected by Ethernet. From the operator's view, the upper level is the CONSOLE level and the lower one is the ILC level. Signals from, or to, equipment are connected to ILCs through analog/digital interfaces, GPIB buses, RS232 serial links, etc.; the ILC is an IEEE 1014 bus (VMEbus) based system running the pSOS+ real-time multi-tasking kernel and pNA+ (TCP/IP protocols) communication software. The control software of the CONSOLE level is developed under the VMS operating system on DEC workstations, and the graphical user interfaces are built on the X-Window/Motif environment. The control system has fulfilled the expectations of the facility commissioning group. It has also proved to be a simple, stable, accurate and easily maintained system. ((orig.))

  16. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  17. A polling model with an autonomous server

    NARCIS (Netherlands)

    de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.

    Polling models are used as an analytical performance tool in several application areas. In these models, the focus often is on controlling the operation of the server as to optimize some performance measure. For several applications, controlling the server is not an issue as the server moves

  18. NRSAS: Nuclear Receptor Structure Analysis Servers.

    NARCIS (Netherlands)

    Bettler, E.J.M.; Krause, R.; Horn, F.; Vriend, G.

    2003-01-01

    We present a coherent series of servers that can perform a large number of structure analyses on nuclear hormone receptors. These servers are part of the NucleaRDB project, which provides a powerful information system for nuclear hormone receptors. The computations performed by the servers include

  19. SQL Server Analysis Services 2012 cube development cookbook

    CERN Document Server

    Dewald, Baya; Hughes, Steve

    2013-01-01

    A practical cookbook packed with recipes to help developers produce data cubes as quickly as possible by following step-by-step instructions, rather than explaining data mining concepts with SSAS. If you are a BI or ETL developer using SQL Server Analysis Services to build OLAP cubes, this book is ideal for you. Prior knowledge of relational databases and experience with Excel as well as SQL development is required.

  20. AsteriX: a Web server to automatically extract ligand coordinates from figures in PDF articles.

    Science.gov (United States)

    Lounnas, V; Vriend, G

    2012-02-27

    Coordinates describing the chemical structures of small molecules that are potential ligands for pharmaceutical targets are used at many stages of the drug design process. The coordinates of the vast majority of ligands can be obtained from either publicly accessible or commercial databases. However, interesting ligands sometimes are only available from the scientific literature, in which case their coordinates need to be reconstructed manually, a process that consists of a series of time-consuming steps. We present a Web server that helps reconstruct the three-dimensional (3D) coordinates of ligands for which a two-dimensional (2D) picture is available in a PDF file. The software, called AsteriX, analyses every picture contained in the PDF file and attempts to determine automatically whether or not it contains ligands. Areas in pictures that may contain molecular structures are processed to extract connectivity and atom type information that allow coordinates to be subsequently reconstructed. The AsteriX Web server was tested on a series of articles containing a large diversity of graphical representations. In total, 88% of 3249 ligand structures present in the test set were identified as chemical diagrams. Of these, about half were interpreted correctly as 3D structures, and a further one-third required only minor manual corrections. It is in principle impossible to always correctly reconstruct 3D coordinates from pictures, because there are many different protocols for drawing a 2D image of a ligand and, more importantly, a wide variety of semantic annotations are possible. The AsteriX Web server therefore includes facilities that allow users to augment partial or partially correct 3D reconstructions. All 3D reconstructions are submitted, checked, and corrected by the users at the server and are freely available for everybody. The coordinates of the reconstructed ligands are made available in a series of formats commonly used in drug design research.

  1. Designing a Lexical Database for a Combined Use of Corpus Annotation and Dictionary Editing

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas; Langer, Gabriele

    2016-01-01

    In a combined corpus-dictionary project, you would need one lexical database that could serve as a shared “backbone” for both corpus annotation and dictionary editing, but it is not that easy to define a database structure that applies satisfactorily to both these purposes. In this paper, we will exemplify the problem and present ideas on how to model structures in a lexical database that facilitate corpus annotation as well as dictionary editing. The paper is a joint work between the DGS Corpus Project and the DTS Dictionary Project. The two projects come from opposite sides of the spectrum (one adjusting a lexical database grown from dictionary making for corpus annotation, one building a lexical database in parallel with corpus annotation and editing a corpus-based dictionary), and we will consider requirements and feasible structures for a database that can serve both corpus and dictionary.

  2. PENGEMBANGAN ANTIVIRUS BERBASIS CLIENT SERVER

    Directory of Open Access Journals (Sweden)

    Richki Hardi

    2015-07-01

    Full Text Available The era of globalization is also the era in which computer viruses have grown rapidly; they are no longer a matter of mere academic research but have become a common problem for computer users around the world. The damage is increasing with the widespread use of the Internet as a global communication line between computer users, as shown by the CSI/FBI survey results. Along with this progress, computer viruses have undergone an evolution in form, characteristics and distribution media, such as worms, spyware, Trojan horses and other malicious programs. Through the development of a client-server based antivirus, the user can easily determine the behavior of viruses and worms, know which part of an operating system is being attacked, and rely on it as a fast and reliable scanning engine that recognizes viruses while saving memory.

  3. CERN servers go to Mexico

    CERN Multimedia

    Stefania Pandolfi

    2015-01-01

    On Wednesday, 26 August, 384 servers from the CERN Computing Centre were donated to the Faculty of Science in Physics and Mathematics (FCFM) and the Mesoamerican Centre for Theoretical Physics (MCTP) at the University of Chiapas, Mexico.   CERN’s Director-General, Rolf Heuer, met the Mexican representatives in an official ceremony in Building 133, where the servers were prepared for shipment. From left to right: Frédéric Hemmer, CERN IT Department Head; Raúl Heredia Acosta, Deputy Permanent Representative of Mexico to the United Nations and International Organizations in Geneva; Jorge Castro-Valle Kuehne, Ambassador of Mexico to the Swiss Confederation and the Principality of Liechtenstein; Rolf Heuer, CERN Director-General; Luis Roberto Flores Castillo, President of the Swiss Chapter of the Global Network of Qualified Mexicans Abroad; Virginia Romero Tellez, Coordinator of Institutional Relations of the Swiss Chapter of the Global Network of Qualified Me...

  4. Web server for priority ordered multimedia services

    Science.gov (United States)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provision of CM (continuous media) services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot-page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content-addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is part of a distributed network with load-balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams that are fed into an auto-regressive moving average (ARMA) based traffic-shaping circuitry before being transmitted through the network.
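
    The priority ordering described above can be sketched as a single priority queue; the numeric levels below are just one possible encoding of the order given in the abstract:

        # Priority-ordered request scheduling, encoding the abstract's order.
        import heapq

        PRIORITY = {
            "admin_rw": 0,
            "hot_multicast": 1,  # hot-page CM and Web multicasting
            "cm_read": 2,
            "web_read": 3,
            "cm_write": 4,
            "web_write": 5,
        }

        queue = []
        seq = 0

        def submit(kind, request):
            global seq
            heapq.heappush(queue, (PRIORITY[kind], seq, request))
            seq += 1

        def next_request():
            return heapq.heappop(queue)[2] if queue else None

        submit("web_read", "GET /index.html")
        submit("hot_multicast", "STREAM /movies/new-release")
        print(next_request())  # the hot CM stream is served first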

  5. Mastering Microsoft Windows Server 2008 R2

    CERN Document Server

    Minasi, Mark; Finn, Aidan

    2010-01-01

    The one book you absolutely need to get up and running with Windows Server 2008 R2. One of the world's leading Windows authorities and top-selling author Mark Minasi explores every nook and cranny of the latest version of Microsoft's flagship network operating system, Windows Server 2008 R2, giving you the most in-depth coverage in any book on the market.: Focuses on Windows Server 2008 R2, the newest version of Microsoft's Windows server line of operating systems, and the ideal server for new Windows 7 clients; Author Mark Minasi is one of the world's leading Windows authorities and h

  6. Microsoft® SQL Server® 2008 Step by Step

    CERN Document Server

    Hotek, Mike

    2009-01-01

    Teach yourself SQL Server 2008, one step at a time. Get the practical guidance you need to build database solutions that solve real-world business problems. Learn to integrate SQL Server data in your applications, write queries, develop reports, and employ powerful business intelligence systems. Discover how to: install and work with core components and tools; create tables and index structures; manipulate and retrieve data; secure, manage, back up, and recover databases; apply tuning plus optimization techniques to generate high-performing database applications; optimize availability through clustering, d

  7. The PEP-II project-wide database

    International Nuclear Information System (INIS)

    Chan, A.; Calish, S.; Crane, G.; MacGregor, I.; Meyer, S.; Wong, J.

    1995-05-01

    The PEP-II Project Database is a tool for monitoring the technical and documentation aspects of this accelerator construction. It holds the PEP-II design specifications, fabrication and installation data in one integrated system. Key pieces of the database include the machine parameter list, magnet and vacuum fabrication data, CAD drawings, publications and documentation, survey and alignment data and property control. The database can be extended to contain information required for the operations phase of the accelerator and detector. Features such as viewing CAD drawing graphics from the database will be implemented in the future. This central Oracle database on a UNIX server is built using ORACLE Case tools. Users at the three collaborating laboratories (SLAC, LBL, LLNL) can access the data remotely, using various desktop computer platforms and graphical interfaces

  8. Contingency Contractor Optimization Phase 3 Sustainment Database Design Document - Contingency Contractor Optimization Tool - Prototype

    Energy Technology Data Exchange (ETDEWEB)

    Frazier, Christopher Rawls; Durfee, Justin David; Bandlow, Alisa; Gearhart, Jared Lee; Jones, Katherine A

    2016-05-01

    The Contingency Contractor Optimization Tool – Prototype (CCOT-P) database is used to store input and output data for the linear program model described in [1]. The database allows queries to retrieve these data and supports updating and inserting new input data.

  9. Deep Recurrent Model for Server Load and Performance Prediction in Data Center

    Directory of Open Access Journals (Sweden)

    Zheng Huang

    2017-01-01

    Full Text Available Recurrent neural networks (RNN) have been widely applied to many sequential tagging tasks such as natural language processing (NLP) and time series analysis, and it has been proved that RNNs work well in those areas. In this paper, we propose using an RNN with long short-term memory (LSTM) units for server load and performance prediction. Classical methods for performance prediction focus on building a relation between performance and the time domain, which requires a lot of unrealistic hypotheses. Our model is built based on events (user requests), which are the root cause of server performance. We predict the performance of the servers using RNN-LSTM by analyzing the logs of servers in a data center which contain users' access sequences. Previous work on workload prediction could not generate detailed simulated workloads, which are useful for testing the working condition of servers. Our method provides a new way to reproduce user request sequences to solve this problem by using RNN-LSTM. Experimental results show that our models perform well in generating load and predicting performance on a data set logged from an online service. We did experiments with the nginx web server and the mysql database server, and our methods can easily be applied to other servers in a data center.
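
    A minimal PyTorch sketch of such a predictor, mapping a window of past per-interval request counts to the next value, is given below; the layer sizes and window length are illustrative, not the authors' configuration.

        # LSTM that predicts the next interval's server load from a window of
        # past load values; a sketch, not the paper's exact architecture.
        import torch
        import torch.nn as nn

        class LoadPredictor(nn.Module):
            def __init__(self, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):             # x: (batch, window, 1)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])  # predict the next interval's load

        model = LoadPredictor()
        window = torch.randn(8, 32, 1)        # 8 sequences of 32 past load values
        print(model(window).shape)            # torch.Size([8, 1])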

  10. Framework Model for Database Replication within the Availability Zones

    OpenAIRE

    Al-Mughrabi, Ala'a Atallah; Owaied, Hussein

    2013-01-01

    This paper presents a proposed model for database replication in private cloud availability regions, which is an enhancement of the SQL Server AlwaysOn Layers of Protection Model presented by Microsoft in 2012. The enhancement concentrates on database replication for private cloud availability regions through the use of primary and secondary servers. The paper describes the processes of the proposed model as the client sends Write/Read requests to the server, in synchronous and semi-synchronous replicatio...

  11. CCTOP: a Consensus Constrained TOPology prediction web server.

    Science.gov (United States)

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases, using the probabilistic framework of the hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments, or by adding user-specified constraints. CCTOP shows superior performance to existing approaches. The reliability of each prediction is also calculated, and it correlates with the accuracy of the per-protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmatic access to the CCTOP server is also available, and an example client-side script is provided.

  12. VOMS Server replication process in I2G and EELA Project

    International Nuclear Information System (INIS)

    Rodrigues Silva, B.; Gomes, J.; Montecelo, M.

    2007-01-01

    This paper presents the VOMS server replication process in the I2G and EELA projects, including a detailed description of how to implement the MySQL database replication. It then enumerates the main problems related to VOMS server replication and the solutions implemented at LIP in Portugal. Finally, a proposal is made for extending this approach to another core service, the LCG File Catalog. (Author)
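
    As a rough sketch of what configuring such a MySQL replica involves, the snippet below drives the classic CHANGE MASTER TO / START SLAVE statements from Python. Hostnames, credentials and binlog coordinates are placeholders; the real file/position pair comes from SHOW MASTER STATUS on the primary.

        # Pointing a MySQL replica at a primary; values are illustrative only.
        import pymysql

        replica = pymysql.connect(host="voms-replica", user="root", password="secret")
        with replica.cursor() as cur:
            cur.execute("STOP SLAVE")
            cur.execute(
                "CHANGE MASTER TO MASTER_HOST=%s, MASTER_USER=%s, MASTER_PASSWORD=%s, "
                "MASTER_LOG_FILE=%s, MASTER_LOG_POS=%s",
                ("voms-primary", "repl", "secret", "mysql-bin.000001", 4),
            )
            cur.execute("START SLAVE")
        replica.commit()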

  13. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems; Functions of a Database; Database Management System; Database Components; Database Development Process; Conceptual Design and Data Modeling; Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database

  14. Rancang Bangun Keanggotaan Perpustakaan STT Telematika Telkom Menggunakan RFID Berbasis Java 2 Standard Edition Dengan Konsep Client Server

    Directory of Open Access Journals (Sweden)

    Yana Yuniarsyah

    2013-05-01

    Full Text Available RFID is a relatively new technology that has not yet been widely applied. It can overcome the disadvantages of barcode technology. One application of RFID is the library membership card. The STT Telematika Library uses its membership card for borrowing and returning transactions only; with RFID, the membership card becomes multifunctional, serving borrowing and returning transactions as well as visitor attendance. Visitor attendance and library reports are distributed using the client-server concept, which makes data management easier for librarians. The programming language used in the design of the library information system is Java 2 Standard Edition (J2SE), with NetBeans 7.0 as the IDE and MySQL as the database. The software was designed using the waterfall (linear sequential) model. The information system was modelled with the Unified Modeling Language (UML), including use case, activity and class diagrams, and the database was designed using an Entity Relationship Diagram (ERD). Testing of the library information system covered user requirements, black-box testing of the program, and user testing. The RFID components used in the library information system are the RFID reader, which reads the information carried by the RFID tag, and the RFID tag, which transmits its information to the reader. The success of the client-server concept is demonstrated by visitor attendance and report display from the client, and by the server storing the visitor attendance data.

  15. DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation

    Science.gov (United States)

    Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh

    2014-10-01

    The development of the latest face databases is providing researchers different and realistic problems that play an important role in the development of efficient algorithms for solving the difficulties during automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different nontribes and Mongolian tribes of north-east India, with their anthropometric measurements for identification. Database images are captured within a room with controlled variations in illumination, expression, and pose, along with variability in age, gender, accessories, make-up, and partial occlusion. Each image contains the combined primary challenges of face recognition, i.e., illumination, expression, and pose. This database also represents some new features: soft biometric traits such as mole, freckle, scar, etc., and facial anthropometric variations that may be helpful for researchers for biometric recognition. It also gives a comparative study of existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, which other researchers may use as baseline performance scores.

  16. Designing an international industrial hygiene database of exposures among workers in the asphalt industry.

    Science.gov (United States)

    Burstyn, I; Kromhout, H; Cruise, P J; Brennan, P

    2000-01-01

    The objective of this project was to construct a database of exposure measurements which would be used to retrospectively assess the intensity of various exposures in an epidemiological study of cancer risk among asphalt workers. The database was developed as a stand-alone Microsoft Access 2.0 application, which could work in each of the national centres. Exposure data included in the database comprised measurements of exposure levels, plus supplementary information on production characteristics which was analogous to that used to describe companies enrolled in the study. The database has been successfully implemented in eight countries, demonstrating the flexibility and data security features adequate to the task. The database allowed retrieval and consistent coding of 38 data sets of which 34 have never been described in peer-reviewed scientific literature. We were able to collect most of the data intended. As of February 1999 the database consisted of 2007 sets of measurements from persons or locations. The measurements appeared to be free from any obvious bias. The methodology embodied in the creation of the database can be usefully employed to develop exposure assessment tools in epidemiological studies.

  17. Extensible Synthetic File Servers? or: Structuring the Glue between Tester and System Under Test

    NARCIS (Netherlands)

    Belinfante, Axel; Mullender, Sape J.; Collyer, G.

    2007-01-01

    We discuss a few simple scenarios of how we can design and develop a compositional synthetic file server that gives access to external processes – in particular, in the context of testing, gives access to the system under test – such that certain parts of said synthetic file server can be

  18. Robust client/server shared state interactions of collaborative process with system crash and network failures

    NARCIS (Netherlands)

    Wang, Lei; Wombacher, Andreas; Ferreira Pires, Luis; van Sinderen, Marten J.; Chi, Chihung

    With the possibility of system crashes and network failures, the design of robust client/server interactions for collaborative process execution is a challenge. If a business process changes state, it sends messages to relevant processes to inform about this change. However, server crashes and

  19. Performance of Distributed Query Optimization in Client/Server Systems

    NARCIS (Netherlands)

    Skowronek, J.; Blanken, Henk; Wilschut, A.N.

    The design, implementation and performance of an optimizer for a nested query language is considered. The optimizer operates in a client/server environment, in particular an Intranet setting. The paper deals with the scalability challenge by tackling the load of many clients by allocating optimizer

  20. Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile

    Science.gov (United States)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco

    2014-05-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.
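
    The query-language interface can be sketched as follows: a WCPS expression is posted to the server, which returns only the derived product rather than the raw archive. The endpoint URL and coverage name below are illustrative, and the exact request encoding may differ between server deployments.

        # Shipping a WCPS query to a coverage server; a hedged sketch only.
        import requests

        WCPS_ENDPOINT = "http://example.org/rasdaman/ows"  # placeholder endpoint
        query = """
        for $c in (AvgLandTemp)
        return encode($c[ansi("2014-07")], "png")
        """
        resp = requests.post(WCPS_ENDPOINT, data={
            "service": "WCS", "version": "2.0.1",
            "request": "ProcessCoverages", "query": query,
        })
        resp.raise_for_status()
        open("july_2014.png", "wb").write(resp.content)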

  1. Quantifying Precision and Availability of Location Memory in Everyday Pictures and Some Implications for Picture Database Design

    Science.gov (United States)

    Lansdale, Mark W.; Oliff, Lynda; Baguley, Thom S.

    2005-01-01

    The authors investigated whether memory for object locations in pictures could be exploited to address known difficulties of designing query languages for picture databases. M. W. Lansdale's (1998) model of location memory was adapted to 4 experiments observing memory for everyday pictures. These experiments showed that location memory is…

  2. CBD: a biomarker database for colorectal cancer.

    Science.gov (United States)

    Zhang, Xueli; Sun, Xiao-Feng; Cao, Yang; Ye, Benchen; Peng, Qiliang; Liu, Xingyun; Shen, Bairong; Zhang, Hong

    2018-01-01

    The colorectal cancer (CRC) biomarker database (CBD) was established based on 870 identified CRC biomarkers and their relevant information from 1115 original articles in PubMed published from 1986 to 2017. In this version of the CBD, CRC biomarker data were collected, sorted, displayed and analysed. With its credible contents, the CBD is a powerful and time-saving tool that provides comprehensive and accurate information for further CRC biomarker research. The CBD was constructed on a MySQL server; HTML, PHP and JavaScript were used to implement the web interface, with Apache as the HTTP server. All of these web operations run under Windows. The CBD provides users with information on individual biomarkers, categorized by the biological category, source and application of the biomarker; the experimental methods, results, authors and publication resources; and the research region, average age of the cohort, gender, race, number of tumours, tumour location and stage. We only collect data from articles with clear and credible results proving that the biomarkers are useful in the diagnosis, treatment or prognosis of CRC. The CBD also provides a professional platform for researchers interested in CRC research to communicate, exchange research ideas and design further high-quality research in CRC. They can submit their new findings to our database via the submission page and communicate with us in the CBD. Database URL: http://sysbio.suda.edu.cn/CBD/.

  3. Comparing speed of Web Map Service with GeoServer on ESRI Shapefile and PostGIS

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2016-07-01

    Full Text Available There are several options for configuring a Web Map Service using several map servers. GeoServer is one of the most popular map servers nowadays. GeoServer is able to read data from several sources. A very popular data source is the ESRI Shapefile: it is well documented, and most software for geodata processing is able to read and write data in this format. Another very popular data store is the PostgreSQL/PostGIS object-relational database. Both data sources have advantages and disadvantages, and the user of GeoServer has to decide which one to use. The paper describes a comparison of the performance of the GeoServer Web Map Service when reading data from an ESRI Shapefile or from a PostgreSQL/PostGIS database.
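
    A comparison of this kind can be sketched by timing identical GetMap requests against two layers that differ only in their backing store; the base URL and layer names below are placeholders.

        # Timing WMS GetMap responses for a Shapefile-backed and a PostGIS-backed layer.
        import time
        import requests

        BASE = "http://localhost:8080/geoserver/wms"
        PARAMS = {
            "service": "WMS", "version": "1.1.1", "request": "GetMap",
            "srs": "EPSG:4326", "bbox": "12,48,19,51",
            "width": "1024", "height": "512", "format": "image/png",
        }

        for layer in ("demo:roads_shapefile", "demo:roads_postgis"):
            start = time.perf_counter()
            requests.get(BASE, params={**PARAMS, "layers": layer, "styles": ""})
            print(layer, round(time.perf_counter() - start, 3), "s")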

  4. Design and implementation of component reliability database management system for NPP

    International Nuclear Information System (INIS)

    Kim, S. H.; Jung, J. K.; Choi, S. Y.; Lee, Y. H.; Han, S. H.

    1999-01-01

    KAERI is constructing a component reliability database for Korean nuclear power plants. This paper describes the development of the data management tool that runs on the component reliability database. It runs in an intranet environment and is used to analyze failure modes and failure severity in order to compute component failure rates. We are now developing additional modules to manage operation history and test history, and algorithms for the calculation of component failure history and reliability

  5. Adventures in the evolution of a high-bandwidth network for central servers

    International Nuclear Information System (INIS)

    Swartz, K.L.; Cottrell, L.; Dart, M.

    1994-08-01

    In a small network, clients and servers may all be connected to a single Ethernet without significant performance concerns. As the number of clients on a network grows, the necessity of splitting the network into multiple sub-networks, each with a manageable number of clients, becomes clear. Less obvious is what to do with the servers. Group file servers on subnets and multihomed servers offer only partial solutions -- many other types of servers do not lend themselves to a decentralized model, and tend to collect on another, well-connected but overloaded Ethernet. The higher speed of FDDI seems to offer an easy solution, but in practice both expense and interoperability problems render FDDI a poor choice. Ethernet switches appear to permit cheaper and more reliable networking to the servers while providing an aggregate network bandwidth greater than a simple Ethernet. This paper studies the evolution of the server networks at SLAC. Difficulties encountered in the deployment of FDDI are described, as are the tools and techniques used to characterize the traffic patterns on the server network. Performance of Ethernet, FDDI, and switched Ethernet networks is analyzed, as are reliability and maintainability issues for these alternatives. The motivations for re-designing the SLAC general server network to use a switched Ethernet instead of FDDI are described, as are the reasons for choosing FDDI for the farm and firewall networks at SLAC. Guidelines are developed which may help in making this choice for other networks

  6. The weighted 2-server problem

    Czech Academy of Sciences Publication Activity Database

    Chrobak, M.; Sgall, Jiří

    2004-01-01

    Roč. 324, 2-3 (2004), s. 289-319 ISSN 0304-3975 R&D Projects: GA MŠk ME 103; GA MŠk ME 476; GA ČR GA201/01/1195; GA MŠk LN00A056; GA AV ČR IAA1019901; GA AV ČR IAA1019401 Institutional research plan: CEZ:AV0Z1019905 Keywords: online algorithms * k-server problem Subject RIV: BA - General Mathematics Impact factor: 0.676, year: 2004

  7. Measuring SIP proxy server performance

    CERN Document Server

    Subramanian, Sureshkumar V

    2013-01-01

    Internet Protocol (IP) telephony is an alternative to the traditional Public Switched Telephone Networks (PSTN), and the Session Initiation Protocol (SIP) is quickly becoming a popular signaling protocol for VoIP-based applications. SIP is a peer-to-peer multimedia signaling protocol standardized by the Internet Engineering Task Force (IETF), and it plays a vital role in providing IP telephony services through its use of the SIP Proxy Server (SPS), a software application that provides call routing services by parsing and forwarding all the incoming SIP packets in an IP telephony network. SIP Pr

  8. Virtualisasi Server Sederhana Menggunakan Proxmox

    Directory of Open Access Journals (Sweden)

    Teguh Prasandy

    2015-05-01

    Proxmox is used as a virtual server: it provides a local desktop and several nodes. Inside those nodes, operating systems are installed according to the user's needs. IP routing is configured so that the operating systems inside Proxmox can connect to the internet, with the VirtualBox desktop IP 192.168.56.102 used as the gateway for Proxmox and the operating systems inside it; the Proxmox IP is 192.168.56.105, the Linux Ubuntu IP 192.168.56.109 and the Linux Debian IP 192.168.56.108.

  9. Database constraints applied to metabolic pathway reconstruction tools.

    Science.gov (United States)

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.

  10. Database Constraints Applied to Metabolic Pathway Reconstruction Tools

    Directory of Open Access Journals (Sweden)

    Jordi Vilaplana

    2014-01-01

    Full Text Available Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.
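
    As one concrete example of the kind of database-server tuning the study describes, the sketch below raises MySQL's InnoDB buffer pool at runtime so a larger working set stays in memory; the chosen size is illustrative.

        # Tuning a standard MySQL system variable from Python; size is illustrative.
        import pymysql

        conn = pymysql.connect(host="localhost", user="root", password="secret")
        with conn.cursor() as cur:
            # 4 GiB buffer pool; dynamic since MySQL 5.7.
            cur.execute("SET GLOBAL innodb_buffer_pool_size = %s", (4 * 1024**3,))
            cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
            print(cur.fetchone())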

  11. STEPS OF THE DESIGN OF CLOUD ORIENTED LEARNING ENVIRONMENT IN THE STUDY OF DATABASES FOR FUTURE TEACHERS OF INFORMATICS

    Directory of Open Access Journals (Sweden)

    Oleksandr M. Kryvonos

    2018-02-01

    Full Text Available The article describes the introduction of cloud services into the educational process of the discipline «Databases» for future teachers of informatics and the design of a cloud oriented learning environment (COLE) on their basis. An analysis of the domestic experience of forming cloud oriented learning environments in educational institutions is carried out, and interpretations are given of the concepts «cloud oriented distance learning system», «cloud oriented learning environment in the study of databases» and «design of the cloud oriented learning environment in the study of databases for future teachers of informatics». The following stages of designing the COLE are selected and described: targeted, conceptual, meaningful, component, introductory, and appraisal-generalization. The structure of the educational interaction of subjects in the study of databases in the conditions of the COLE is developed by means of the cloud oriented distance learning system Canvas, consisting of tools for communication, joint work, planning of educational events, and cloud storage.

  12. LiveBench-1: continuous benchmarking of protein structure prediction servers.

    Science.gov (United States)

    Bujnicki, J M; Elofsson, A; Fischer, D; Rychlewski, L

    2001-02-01

    We present a novel, continuous approach aimed at the large-scale assessment of the performance of available fold-recognition servers. Six popular servers were investigated: PDB-Blast, FFAS, T98-lib, GenTHREADER, 3D-PSSM, and INBGU. The assessment was conducted using as prediction targets a large number of selected protein structures released from October 1999 to April 2000. A target was selected if its sequence showed no significant similarity to any of the proteins previously available in the structural database. Overall, the servers were able to produce structurally similar models for one-half of the targets, but significantly accurate sequence-structure alignments were produced for only one-third of the targets. We further classified the targets into two sets: easy and hard. We found that all servers were able to find the correct answer for the vast majority of the easy targets if a structurally similar fold was present in the server's fold libraries. However, among the hard targets--where standard methods such as PSI-BLAST fail--the most sensitive fold-recognition servers were able to produce similar models for only 40% of the cases, half of which had a significantly accurate sequence-structure alignment. Among the hard targets, the presence of updated libraries appeared to be less critical for the ranking. An "ideally combined consensus" prediction, where the results of all servers are considered, would increase the percentage of correct assignments by 50%. Each server had a number of cases with a correct assignment, where the assignments of all the other servers were wrong. This emphasizes the benefits of considering more than one server in difficult prediction tasks. The LiveBench program (http://BioInfo.PL/LiveBench) is being continued, and all interested developers are cordially invited to join.

  13. HDF-EOS Web Server

    Science.gov (United States)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: Extract metadata in Object Definition Language (ODL) from an HDF-EOS file, Convert the metadata from ODL to Extensible Markup Language (XML), Reformat the XML metadata into human-readable Hypertext Markup Language (HTML), Publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeN-DAP) server computer, and Reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.
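
    The pipeline can be sketched as a chain of subprocesses; the tool names used below (odl_extract, odl2xml, xml2html) are stand-ins for the actual Data Usability Group utilities, whose command-line names are not given in the abstract.

        # Hedged sketch of the ODL -> XML -> HTML publishing chain.
        import subprocess

        def publish(hdfeos_file, webroot="/var/www/hdfeos"):
            odl = subprocess.run(["odl_extract", hdfeos_file],
                                 capture_output=True, check=True).stdout
            xml = subprocess.run(["odl2xml"], input=odl,
                                 capture_output=True, check=True).stdout
            html = subprocess.run(["xml2html"], input=xml,
                                  capture_output=True, check=True).stdout
            with open(f"{webroot}/{hdfeos_file}.html", "wb") as out:
                out.write(html)
            # Publish the original file alongside the HTML metadata.
            subprocess.run(["cp", hdfeos_file, webroot], check=True)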

  14. CERN servers donated to Ghana

    CERN Multimedia

    CERN Bulletin

    2012-01-01

    Cutting-edge research requires a constantly high performance of the computing equipment. At the CERN Computing Centre, computers typically need to be replaced after about four years of use. However, while servers may be withdrawn from cutting-edge use, they are still good for other uses elsewhere. This week, 220 servers and 30 routers were donated to the Kwame Nkrumah University of Science and Technology (KNUST) in Ghana.   “KNUST will provide a good home for these computers. The university has also developed a plan for using them to develop scientific collaboration with CERN,” said John Ellis, a professor at King’s College London and a visiting professor in CERN’s Theory Group.  John Ellis was heavily involved in building the relationship with Ghana, which started in 2006 when a Ghanaian participated in the CERN openlab student programme. Since 2007 CERN has hosted Ghanaians especially from KNUST in the framework of the CERN Summer Student Progr...

  15. Home media server content management

    Science.gov (United States)

    Tokmakoff, Andrew A.; van Vliet, Harry

    2001-07-01

    With the advent of set-top boxes, the convergence of TV (broadcasting) and PC (Internet) is set to enter the home environment. Currently, a great deal of activity is occurring in developing standards (TV-Anytime Forum) and devices (TiVo) for local storage on Home Media Servers (HMS). These devices lie at the heart of convergence of the triad: communications/networks - content/media - computing/software. Besides massive storage capacity and being a communications 'gateway', the home media server is characterised by the ability to handle metadata and software that provides an easy to use on-screen interface and intelligent search/content handling facilities. In this paper, we describe a research prototype HMS that is being developed within the GigaCE project at the Telematica Instituut. Our prototype demonstrates advanced search and retrieval (video browsing), adaptive user profiling and an innovative 3D component of the Electronic Program Guide (EPG) which represents online presence. We discuss the use of MPEG-7 for representing metadata, the use of MPEG-21 working draft standards for content identification, description and rights expression, and the use of HMS peer-to-peer content distribution approaches. Finally, we outline explorative user behaviour experiments that aim to investigate the effectiveness of the prototype HMS during development.

  16. Passive Detection of Misbehaving Name Servers

    Science.gov (United States)

    2013-10-01

    name servers that changed IP address five or more times in a month. Solid red line indicates those servers possibly linked to pharmaceutical scams. ...malicious and stated that fast-flux hosting “is considered one of the most serious threats to online activities today” [ICANN 2008, p. 2]. The...that time, apparently independent of filters on name-server flux, a large number of pharmaceutical scams were taken down. These scams apparently

  17. Record Recommendations for the CERN Document Server

    CERN Document Server

    AUTHOR|(CDS)2096025; Marian, Ludmila

    CERN Document Server (CDS) is the institutional repository of the European Organization for Nuclear Research (CERN). It hosts all the research material produced at CERN, as well as multimedia and administrative documents. It currently has more than 1.5 million records grouped in more than 1000 collections. Its underlying platform is Invenio, an open source digital library system created at CERN. As the size of CDS increases, discovering useful and interesting records becomes more challenging. Therefore, the goal of this work is to create a system that supports the user in the discovery of related interesting records. To achieve this, a set of recommended records are displayed on the record page. These recommended records are based on the analyzed behavior (page views and downloads) of other users. This work will describe the methods and algorithms used for creating and implementing the recommender, and its integration with the underlying software platform, Invenio. A very important decision factor when designing a recomme...

  18. A Web Server for MACCS Magnetometer Data

    Science.gov (United States)

    Engebretson, Mark J.

    1998-01-01

    NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.

  19. Towards Big Earth Data Analytics: The EarthServer Approach

    Science.gov (United States)

    Baumann, Peter

    2013-04-01

    Big Data in the Earth sciences, the Tera- to Exabyte archives, are mostly made up of coverage data, whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built on rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data
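
    The query-language idea can be illustrated with an OGC WCPS request of the kind rasdaman evaluates server-side, so that only the processed result crosses the network. The endpoint and coverage name below are invented for illustration, not real services.

        # Hedged sketch: POST a WCPS query; only the processed tile is returned.
        import requests

        WCPS_ENDPOINT = "https://example.org/rasdaman/ows"  # hypothetical service
        # Hypothetical coverage name; subsets a 500x500 window and encodes as PNG.
        query = 'for c in (SatelliteScene) return encode(c[i(0:499), j(0:499)], "png")'

        resp = requests.post(WCPS_ENDPOINT, data={"query": query}, timeout=60)
        resp.raise_for_status()
        with open("subset.png", "wb") as f:
            f.write(resp.content)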

  20. Mastering Windows Server 2008 Networking Foundations

    CERN Document Server

    Minasi, Mark; Mueller, John Paul

    2011-01-01

    Find in-depth coverage of general networking concepts and basic instruction on Windows Server 2008 installation and management including active directory, DNS, Windows storage, and TCP/IP and IPv4 networking basics in Mastering Windows Server 2008 Networking Foundations. One of three new books by best-selling author Mark Minasi, this guide explains what servers do, how basic networking works (IP basics and DNS/WINS basics), and the fundamentals of the under-the-hood technologies that support staff must understand. Learn how to install Windows Server 2008 and build a simple network, security co

  1. National Medical Terminology Server in Korea

    Science.gov (United States)

    Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee

    Interoperable EHR (Electronic Health Record) necessitates at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as the Systematized Nomenclature of Medicine (SNOMED). The server is intended to satisfy the need for quality terminology systems in local primary to tertiary hospitals. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.

  2. A tandem queue with delayed server release

    OpenAIRE

    Nawijn, W.M.

    1997-01-01

    We consider a tandem queue with two stations. The first station is an s-server queue with Poisson arrivals and exponential service times. After terminating his service in the first station, a customer enters the second station to require service at an exponential single server, while in the meantime he is blocking his server in station 1 until he completes service in station 2, whereupon the server in station 1 is released. An analysis of the generating function of the simultaneous probability di...

  3. The IVTANTHERMO-Online database for thermodynamic properties of individual substances with web interface

    Science.gov (United States)

    Belov, G. V.; Dyachkov, S. A.; Levashov, P. R.; Lomonosov, I. V.; Minakov, D. V.; Morozov, I. V.; Sineva, M. A.; Smirnov, V. N.

    2018-01-01

    The database structure, main features and user interface of the IVTANTHERMO-Online system are reviewed. This system continues the series of the IVTANTHERMO packages developed in JIHT RAS. It includes the database for thermodynamic properties of individual substances and related software for analysis of experimental results, data fitting, and calculation and estimation of thermodynamic functions and thermochemical quantities. In contrast to the previous IVTANTHERMO versions it has a new extensible database design, a client-server architecture, and a user-friendly web interface with a number of new features for online and offline data processing.

  4. From Passive to Active in the Design of External Radiotherapy Database at Oncology Institute

    Directory of Open Access Journals (Sweden)

    Valentin Ioan CERNEA

    2009-12-01

    Full Text Available Implementation during 1997 of a computer network at the Oncology Institute “Prof. Dr. Ion Chiricuţă” from Cluj-Napoca (OICN) opened the era of the patient electronic file, in which the presented database is included. The database, developed before 2000 and used until December 2006 in all reports of OICN, has collected data from primary documents such as radiotherapy files. The present level of the computer network permits reversing the flow of data, from the computer to the primary document. Now the primary document is first built electronically inside the computer and then, after validation, printed as a conventional document. The paper discusses the issues concerning safety, functionality and access that derive from this change.

  5. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g., at present the open community version of MySQL and a single-instance Oracle database server. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  6. Distributed control system for demand response by servers

    Science.gov (United States)

    Hall, Joseph Edward

    Within the broad topical designation of smart grid, research in demand response, or demand-side management, focuses on investigating possibilities for electrically powered devices to adapt their power consumption patterns to better match generation and more efficiently integrate intermittent renewable energy sources, especially wind. Devices such as battery chargers, heating and cooling systems, and computers can be controlled to change the time, duration, and magnitude of their power consumption while still meeting workload constraints such as deadlines and rate of throughput. This thesis presents a system by which a computer server, or multiple servers in a data center, can estimate the power imbalance on the electrical grid and use that information to dynamically change the power consumption as a service to the grid. Implementation on a testbed demonstrates the system with a hypothetical but realistic usage case scenario of an online video streaming service in which there are workloads with deadlines (high-priority) and workloads without deadlines (low-priority). The testbed is implemented with real servers, estimates the power imbalance from the grid frequency with real-time measurements of the live outlet, and uses a distributed, real-time algorithm to dynamically adjust the power consumption of the servers based on the frequency estimate and the throughput of video transcoder workloads. Analysis of the system explains and justifies multiple design choices, compares the significance of the system in relation to similar publications in the literature, and explores the potential impact of the system.
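
    The frequency-to-power mapping at the heart of such a controller can be sketched in a few lines; the gain and the clamping policy below are assumptions for illustration, not the thesis design.

        # Under-frequency indicates a generation deficit, so shed low-priority work.
        NOMINAL_HZ = 60.0
        GAIN = 2.0  # throughput change per Hz of deviation; assumed value

        def low_priority_share(measured_hz: float) -> float:
            """Fraction of full low-priority throughput to run, clamped to [0, 1]."""
            deviation = measured_hz - NOMINAL_HZ  # negative => grid is short of power
            return max(0.0, min(1.0, 1.0 + GAIN * deviation))

        # Example: at 59.9 Hz the servers shed 20% of low-priority transcoding,
        # while high-priority (deadline) workloads are left untouched.
        print(low_priority_share(59.9))  # -> 0.8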

  7. MotifNet: a web-server for network motif analysis.

    Science.gov (United States)

    Smoly, Ilan Y; Lerman, Eugene; Ziv-Ukelson, Michal; Yeger-Lotem, Esti

    2017-06-15

    Network motifs are small topological patterns that recur in a network significantly more often than expected by chance. Their identification emerged as a powerful approach for uncovering the design principles underlying complex networks. However, available tools for network motif analysis typically require download and execution of computationally intensive software on a local computer. We present MotifNet, the first open-access web-server for network motif analysis. MotifNet allows researchers to analyze integrated networks, where nodes and edges may be labeled, and to search for motifs of up to eight nodes. The output motifs are presented graphically and the user can interactively filter them by their significance, number of instances, node and edge labels, and node identities, and view their instances. MotifNet also allows the user to distinguish between motifs that are centered on specific nodes and motifs that recur in distinct parts of the network. MotifNet is freely available at http://netbio.bgu.ac.il/motifnet . The website was implemented using ReactJs and supports all major browsers. The server interface was implemented in Python with data stored on a MySQL database. estiyl@bgu.ac.il or michaluz@cs.bgu.ac.il. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  8. Database System Design and Implementation for Marine Air-Traffic-Controller Training

    Science.gov (United States)

    2017-06-01

    units used larger applications such as Microsoft Access or MySQL. These systems have outdated platforms, and individuals currently maintaining these... Oracle Database 12c was version 12.2.0.20.96, IDE version 12.2.1.0.42.151001.0541. SQL Developer was version 4.1.3.20.96, which used the Java platform

  9. Term Relevance Feedback and Mediated Database Searching: Implications for Information Retrieval Practice and Systems Design.

    Science.gov (United States)

    Spink, Amanda

    1995-01-01

    This study uses the human approach to examine the sources and effectiveness of search terms selected during 40 mediated interactive database searches and focuses on determining the retrieval effectiveness of search terms identified by users and intermediaries from retrieved items during term relevance feedback. (Author/JKP)

  10. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    Science.gov (United States)

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
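
    The precalculated-distance idea reduces an interaction search to an indexed range query. A toy version of that pattern follows; the table layout and atom labels are invented for illustration, not the actual Protein Relational Database schema.

        # Toy distance-constraint query over precomputed protein-ligand contacts.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE contact (
            pdb_id TEXT, prot_atom TEXT, lig_atom TEXT, dist REAL)""")
        con.executemany(
            "INSERT INTO contact VALUES (?, ?, ?, ?)",
            [("1ABC", "SER195.OG", "LIG.C1", 3.1),
             ("1ABC", "HIS57.NE2", "LIG.N1", 2.8),
             ("2XYZ", "SER195.OG", "LIG.C1", 4.7)])

        # Retrieval reduces to an indexed lookup on atom identity plus a range
        # predicate on the stored distance -- no geometry is computed at query time.
        rows = con.execute(
            "SELECT pdb_id, dist FROM contact WHERE prot_atom = ? AND dist <= ?",
            ("SER195.OG", 3.5)).fetchall()
        print(rows)  # [('1ABC', 3.1)]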

  11. The visualCMAT: A web-server to select and interpret correlated mutations/co-evolving residues in protein families.

    Science.gov (United States)

    Suplatov, Dmitry; Sharapova, Yana; Timonina, Daria; Kopylov, Kirill; Švedas, Vytas

    2018-04-01

    The visualCMAT web-server was designed to assist experimental research in the fields of protein/enzyme biochemistry, protein engineering, and drug discovery by providing an intuitive and easy-to-use interface to the analysis of correlated mutations/co-evolving residues. Sequence and structural information describing homologous proteins are used to predict correlated substitutions by the Mutual information-based CMAT approach, classify them into spatially close co-evolving pairs, which either form a direct physical contact or interact with the same ligand (e.g. a substrate or a crystallographic water molecule), and long-range correlations, and annotate and rank binding sites on the protein surface by the presence of statistically significant co-evolving positions. The results of the visualCMAT are organized for a convenient visual analysis and can be downloaded to a local computer as a content-rich all-in-one PyMol session file with multiple layers of annotation corresponding to bioinformatic, statistical and structural analyses of the predicted co-evolution, or further studied online using the built-in interactive analysis tools. The online interactivity is implemented in HTML5 and therefore neither plugins nor Java are required. The visualCMAT web-server is integrated with the Mustguseal web-server capable of constructing large structure-guided sequence alignments of protein families and superfamilies using all available information about their structures and sequences in public databases. The visualCMAT web-server can be used to understand the relationship between structure and function in proteins, applied to selecting hotspots and compensatory mutations for rational design and directed evolution experiments to produce novel enzymes with improved properties, and employed to study the mechanism of selective ligand binding and allosteric communication between topologically independent sites in protein structures. The web-server is freely available at https

  12. Oceanotron, Scalable Server for Marine Observations

    Science.gov (United States)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, more generally, for water-column observation repositories, Ifremer decided to develop the oceanotron server (2010). Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OPeNDAP, ...), the server is designed to manage plugins: StorageUnits, which read specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format), and FrontDesks, which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OPeNDAP). In between, a third type of plugin may be inserted: TransformationUnits, which perform ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in dB to meters under the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron FrontDesk. The modules are connected by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC/Observation & Measurement and Unidata/Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner interoperability level makes it possible to capitalize on ocean-domain expertise in software development without being indentured to
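
    The plugin contract can be pictured roughly as follows. Oceanotron itself is implemented in Java; the class and method names in this Python sketch are guesses for illustration only, not the actual API.

        # Schematic of the three plugin types and how a request flows through them.
        from abc import ABC, abstractmethod
        from typing import Iterable

        class StorageUnit(ABC):
            """Reads one repository format and yields observation features."""
            @abstractmethod
            def read(self, query: dict) -> Iterable[dict]: ...

        class TransformationUnit(ABC):
            """Optional ocean-specific transformation of features."""
            @abstractmethod
            def apply(self, features: Iterable[dict]) -> Iterable[dict]: ...

        class FrontDesk(ABC):
            """Serves one interoperable protocol (WMS, SOS, OPeNDAP, ...)."""
            @abstractmethod
            def respond(self, features: Iterable[dict]) -> bytes: ...

        def handle(request: dict, storage: StorageUnit,
                   transforms: list[TransformationUnit], desk: FrontDesk) -> bytes:
            features = storage.read(request)
            for t in transforms:  # e.g. pressure-in-dB -> depth-in-metres
                features = t.apply(features)
            return desk.respond(features)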

  13. Essential Mac OS X panther server administration integrating Mac OS X server into heterogeneous networks

    CERN Document Server

    Bartosh, Michael

    2004-01-01

    If you've ever wondered how to safely manipulate Mac OS X Panther Server's many underlying configuration files or needed to explain AFP permission mapping--this book's for you. From the command line to Apple's graphical tools, the book provides insight into this powerful server software. Topics covered include installation, deployment, server management, web application services, data gathering, and more

  14. [Method of traditional Chinese medicine formula design based on 3D-database pharmacophore search and patent retrieval].

    Science.gov (United States)

    He, Yu-su; Sun, Zhi-yi; Zhang, Yan-ling

    2014-11-01

    By using the pharmacophore model of mineralocorticoid receptor antagonists as a starting point, this study explores a method of traditional Chinese medicine formula design for anti-hypertensives. Pharmacophore models were generated by the 3D-QSAR pharmacophore (HypoGen) program of DS 3.5, based on a training set composed of 33 mineralocorticoid receptor antagonists. The best pharmacophore model consisted of two hydrogen-bond acceptors, three hydrophobic features, and four excluded volumes. Its correlation coefficients for the training set and test set, N value, and CAI value were 0.9534, 0.6748, 2.878, and 1.119, respectively. From the database screening, 1700 active compounds from 86 source plants were obtained. Because traditional theory lacks an established anti-hypertensive medication strategy, this article takes advantage of patent retrieval in the world traditional medicine patent database in order to design drug formulae. Finally, two anti-hypertensive formulae were obtained.

  15. The database system for the management of technical documentations of PWR fuel design project using CD-ROM

    International Nuclear Information System (INIS)

    Park, Bong Sik; Lee, Won Jae; Ryu, Jae Kwon; Jo, In Hang; Chang, Jong Hwa.

    1996-12-01

    In this report, the database system developed for the management of technical documentation of the PWR fuel design project using CD-ROM (compact disk - read only memory) is described. The database system, KIRDOCM (KAERI Initial and Reload fuel project technical DOCumentation Management), was developed and installed on a PC using Visual FoxPro 3.0. Descriptions are focused on the user interface of the KIRDOCM. The introduction addresses the background and concept of the development. The main chapters describe the user requirements, the analysis of the computing environment, the design and implementation of the KIRDOCM, the user's manual, and the maintenance of the KIRDOCM for future improvement. The implementation of the KIRDOCM system provides efficiency in the management, maintenance and indexing of technical documents. It is expected that KIRDOCM may be a good reference in applying Visual FoxPro to the development of information management systems. (author). 2 tabs., 13 figs., 8 refs

  16. NCI's Distributed Geospatial Data Server

    Science.gov (United States)

    Larraondo, P. R.; Evans, B. J. K.; Antony, J.

    2016-12-01

    Earth systems, environmental and geophysics datasets are an extremely valuable source of information about the state and evolution of the Earth. However, different disciplines and applications require this data to be post-processed in different ways before it can be used. For researchers experimenting with algorithms across large datasets or combining multiple data sets, the traditional approach of batch data processing and storing all the output for later analysis rapidly becomes unfeasible, and often requires additional work to publish for others to use. Recent developments in distributed computing using interactive access to significant cloud infrastructure open the door for new ways of processing data on demand, hence alleviating the need for storage space for each individual copy of each product. The Australian National Computational Infrastructure (NCI) has developed a highly distributed geospatial data server which supports interactive processing of large geospatial data products, including satellite Earth Observation data and global model data, using flexible user-defined functions. This system dynamically and efficiently distributes the required computations among cloud nodes and thus provides a scalable analysis capability. In many cases this completely alleviates the need to preprocess and store the data as products. This system presents a standards-compliant interface, allowing ready accessibility for users of the data. Typical data wrangling problems such as handling different file formats and data types, or harmonising the coordinate projections or temporal and spatial resolutions, can now be handled automatically by this service. The geospatial data server exposes functionality for specifying how the data should be aggregated and transformed. The resulting products can be served using several standards such as the Open Geospatial Consortium's (OGC) Web Map Service (WMS) or Web Feature Service (WFS), OpenStreetMap tiles, or raw binary arrays under
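
    The standards-compliant interface means a client needs nothing beyond an ordinary OGC request; the endpoint and layer below are placeholders, with the on-demand product assumed to be computed server-side.

        # Hedged sketch of an OGC WMS GetMap request to such a server.
        import requests

        params = {
            "service": "WMS", "version": "1.3.0", "request": "GetMap",
            "layers": "landsat_ndvi",  # hypothetical derived product
            "crs": "EPSG:4326", "bbox": "-44,112,-10,154",
            "width": 512, "height": 512, "format": "image/png",
            "time": "2016-01-01",
        }
        r = requests.get("https://example.org/wms", params=params, timeout=60)
        r.raise_for_status()
        with open("ndvi.png", "wb") as f:
            f.write(r.content)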

  17. SU-E-P-26: Oncospace: A Shared Radiation Oncology Database System Designed for Personalized Medicine, Decision Support, and Research

    International Nuclear Information System (INIS)

    Bowers, M; Robertson, S; Moore, J; Wong, J; DeWeese, T; McNutt, T; Phillips, M; Hendrickson, K; Song, W; Kwok, P

    2015-01-01

    Purpose: Advancement in Radiation Oncology (RO) practice develops through evidence-based medicine and clinical trials. Knowledge usable for treatment planning, decision support and research is contained in our clinical data, stored in an Oncospace database. This data store and the tools for populating and analyzing it are compatible with standard RO practice and are shared with collaborating institutions. The question is: what protocol should govern system development and data sharing within an Oncospace Consortium? We focus our example on the technology and data meaning necessary to share across the Consortium. Methods: Oncospace consists of a database schema, planning and outcome data import, and web-based analysis tools. 1) Database: The Consortium implements a federated data store; each member collects and maintains its own data within an Oncospace schema. For privacy, PHI is contained within a single table, accessible to the database owner. 2) Import: Spatial dose data from treatment plans (Pinnacle or DICOM) is imported via Oncolink. Treatment outcomes are imported from an OIS (MOSAIQ). 3) Analysis: JHU has built a number of webpages to answer analysis questions. Oncospace data can also be analyzed via MATLAB or SAS queries. These materials are available to Consortium members, who contribute enhancements and improvements. Results: 1) The Oncospace Consortium now consists of RO centers at JHU, UVA, UW and the University of Toronto. These members have successfully installed and populated Oncospace databases with over 1000 patients collectively. 2) Members contribute code and get updates via an SVN repository. Errors are reported and tracked via Redmine. Teleconferences include strategizing design and code reviews. 3) Federated databases were successfully queried remotely to combine multiple institutions' DVH data for dose-toxicity analysis (data combined from JHU and UW Oncospace). Conclusion: RO data sharing can and has been effected according to the Oncospace

  18. SU-E-P-26: Oncospace: A Shared Radiation Oncology Database System Designed for Personalized Medicine, Decision Support, and Research

    Energy Technology Data Exchange (ETDEWEB)

    Bowers, M; Robertson, S; Moore, J; Wong, J; DeWeese, T; McNutt, T [Johns Hopkins University, Baltimore, MD (United States); Phillips, M [Univ Washington, Seattle, WA (United States); Hendrickson, K [University of Washington, Seattle, WA (United States); Song, W; Kwok, P [Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, U of T, Toronto, Ontario (Canada)

    2015-06-15

    Purpose: Advancement in Radiation Oncology (RO) practice develops through evidence-based medicine and clinical trials. Knowledge usable for treatment planning, decision support and research is contained in our clinical data, stored in an Oncospace database. This data store and the tools for populating and analyzing it are compatible with standard RO practice and are shared with collaborating institutions. The question is: what protocol should govern system development and data sharing within an Oncospace Consortium? We focus our example on the technology and data meaning necessary to share across the Consortium. Methods: Oncospace consists of a database schema, planning and outcome data import, and web-based analysis tools. 1) Database: The Consortium implements a federated data store; each member collects and maintains its own data within an Oncospace schema. For privacy, PHI is contained within a single table, accessible to the database owner. 2) Import: Spatial dose data from treatment plans (Pinnacle or DICOM) is imported via Oncolink. Treatment outcomes are imported from an OIS (MOSAIQ). 3) Analysis: JHU has built a number of webpages to answer analysis questions. Oncospace data can also be analyzed via MATLAB or SAS queries. These materials are available to Consortium members, who contribute enhancements and improvements. Results: 1) The Oncospace Consortium now consists of RO centers at JHU, UVA, UW and the University of Toronto. These members have successfully installed and populated Oncospace databases with over 1000 patients collectively. 2) Members contribute code and get updates via an SVN repository. Errors are reported and tracked via Redmine. Teleconferences include strategizing design and code reviews. 3) Federated databases were successfully queried remotely to combine multiple institutions' DVH data for dose-toxicity analysis (data combined from JHU and UW Oncospace). Conclusion: RO data sharing can and has been effected according to the Oncospace

  19. Getting started with Oracle WebLogic Server 12c developer's guide

    CERN Document Server

    Nunes, Fabio Mazanatti

    2013-01-01

    Getting Started with Oracle WebLogic Server 12c is a fast-paced and feature-packed book, designed to get you working with Java EE 6, JDK 7 and Oracle WebLogic Server 12c straight away, so start developing your own applications.Getting Started with Oracle WebLogic Server 12c: Developer's Guide is written for developers who are just getting started, or who have some experience, with Java EE who want to learn how to develop for and use Oracle WebLogic Server. Getting Started with Oracle WebLogic Server 12c: Developer's Guide also provides a great overview of the updated features of the 12c releas

  20. Advances in the design, development, and deployment of the U.S. Army Research Laboratory (ARL) multimodal signatures database

    Science.gov (United States)

    Bennett, Kelly; Robertson, James

    2011-06-01

    Recent advances in the design, development, and deployment of U.S. Army Research Laboratory's (ARL) Multimodal Signature Database (MMSDB) create a state-of-the-art database system with Web-based access through a Web interface designed specifically for research and development. Tens of thousands of signatures are currently available for researchers to support their algorithm development and refinement for sensors and other security systems. Each dataset is stored in Hierarchical Data Format 5 (HDF5) for easy modeling and storing of signatures and archived sensor data, ground truth, calibration information, algorithms, and other documentation. Archived HDF5-formatted data provides the basis for computational interoperability across a variety of tools including MATLAB, Octave, and Python. The database has a Web-based front-end with public and restricted access interfaces, along with 24/7 availability and support. This paper describes the overall design of the system and the recent enhancements and future vision, including the ability for researchers to share algorithms, data, and documentation in the cloud, and the ability to run algorithms and software for testing and evaluation purposes remotely across multiple domains and computational tools. The paper also describes in detail the HDF5 format for several multimodal sensor types.
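
    One plausible HDF5 layout for such a signature record, written with h5py; the group and attribute names are illustrative guesses, not the published MMSDB schema.

        # Store a signature plus ground truth and calibration in one HDF5 file.
        import h5py
        import numpy as np

        with h5py.File("signature_0001.h5", "w") as f:
            sig = f.create_group("signature")
            sig.create_dataset("acoustic", data=np.zeros(48000, dtype=np.float32))
            sig["acoustic"].attrs["sample_rate_hz"] = 48000
            truth = f.create_group("ground_truth")
            truth.attrs["target_class"] = "wheeled_vehicle"  # assumed labeling
            f.create_group("calibration").attrs["mic_sensitivity_mv_pa"] = 42.0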

  1. Data model and relational database design for the New Jersey Water-Transfer Data System (NJWaTr)

    Science.gov (United States)

    Tessler, Steven

    2003-01-01

    The New Jersey Water-Transfer Data System (NJWaTr) is a database design for the storage and retrieval of water-use data. NJWaTr can manage data encompassing many facets of water use, including (1) the tracking of various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the storage of descriptions, classifications and locations of places and organizations involved in water-use activities; (3) the storage of details about measured or estimated volumes of water associated with water-use activities; and (4) the storage of information about data sources and water resources associated with water use. In NJWaTr, each water transfer occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NJWaTr model are site, conveyance, transfer/volume, location, and owner. Other important entities include water resource (used for withdrawals and returns), data source, permit, and alias. Multiple water-exchange estimates based on different methods or data sources can be stored for individual transfers. Storage of user-defined details is accommodated for several of the main entities. Many tables contain classification terms to facilitate the detailed description of data items and can be used for routine or custom data summarization. NJWaTr accommodates single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database. Data stored in the NJWaTr structure can be retrieved in user-defined combinations to serve visualization and analytical applications. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
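
    A much-reduced sketch of the core entities named above (site, conveyance, transfer), expressed as SQL executed from Python; the column choices are illustrative, not the published design.

        # Toy schema: transfers move water one way along conveyances between sites.
        import sqlite3

        ddl = """
        CREATE TABLE site      (site_id INTEGER PRIMARY KEY, name TEXT,
                                owner TEXT, location TEXT);
        CREATE TABLE conveyance(conv_id INTEGER PRIMARY KEY,
                                from_site INTEGER REFERENCES site(site_id),
                                to_site   INTEGER REFERENCES site(site_id));
        -- several volume estimates (different methods or data sources) may be
        -- stored for the same transfer, as the record describes
        CREATE TABLE transfer  (transfer_id INTEGER PRIMARY KEY,
                                conv_id INTEGER REFERENCES conveyance(conv_id),
                                month TEXT, volume_mgd REAL, method TEXT);
        """
        con = sqlite3.connect(":memory:")
        con.executescript(ddl)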

  2. Design of security scheme of the radiotherapy planning administration system based on the hospital information system

    International Nuclear Information System (INIS)

    Zhuang Yongzhi; Zhao Jinzao

    2010-01-01

    Objective: To design a security scheme for a radiotherapy planning administration system. Methods: The PowerBuilder 9i language was used to program the system on a client-server model, with Oracle 9i as the database server. Results: In this system, user registration management, user login management, application-level function control, database access control, and an audit trail were designed to provide system security. Conclusions: As a prototype for security analysis and protection, this scheme provides security for the system, the application, and important data and messages, ensuring that the system works normally. (authors)

  3. A polling model with an autonomous server

    NARCIS (Netherlands)

    de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.

    2009-01-01

    This paper considers polling systems with an autonomous server that remains at a queue for an exponential amount of time before moving to the next queue, incurring a generally distributed switch-over time. The server remains at a queue until the exponential visit time expires, also when the queue

  4. A tandem queue with delayed server release

    NARCIS (Netherlands)

    Nawijn, W.M.

    1997-01-01

    We consider a tandem queue with two stations. The first station is an s-server queue with Poisson arrivals and exponential service times. After terminating his service in the first station, a customer enters the second station to require service at an exponential single server, while in the meantime he

  5. Tandem queue with server slow-down

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2007-01-01

    We study how rare events happen in the standard two-node tandem Jackson queue and in a generalization, the so-called slow-down network, see [2]. In the latter model the service rate of the first server depends on the number of jobs in the second queue: the first server slows down if the amount of

  6. Building mail server on distributed computing system

    International Nuclear Information System (INIS)

    Akihiro Shibata; Osamu Hamada; Tomoko Oshikubo; Takashi Sasaki

    2001-01-01

    Electronic mail has become an indispensable function in daily work, and server stability and performance are required. Using DCE and DFS we have built a distributed electronic mail server; that is, servers such as SMTP and IMAP are distributed symmetrically and provide seamless access

  7. Design of Nutrition Catering System for Athletes Based on Access Database

    OpenAIRE

    Hongjiang Wu; Haiyan Zhao; Xugang Liu; Mingshun Xing

    2015-01-01

    In order to monitor and adjust athletes' dietary nutrition scientifically, ActiveX Data Objects (ADO) and Structured Query Language (SQL) were used to produce the program under the development environment of Visual Basic 6.0 and an Access database. The consulting system on food nutrition and diet was developed by combining the two languages and organizing the latest nutrition information. Nutrition balance of physiological characteristics, assessment of nutrition intake, inquiring n...

  8. Using AMDD method for Database Design in Mobile Cloud Computing Systems

    OpenAIRE

    Silviu Claudiu POPA; Mihai-Constantin AVORNICULUI; Vasile Paul BRESFELEAN

    2013-01-01

    The development of the technologies of wireless telecommunications gave birth to new kinds of e-commerce, the so-called Mobile e-Commerce or m-Commerce. Mobile Cloud Computing (MCC) represents a new IT research area that combines mobile computing and cloud computing techniques. Behind a cloud mobile commerce system there is a database containing all necessary information for transactions. By means of the Agile Model Driven Development (AMDD) method, we are able to achieve many benefits that smoo...

  9. SciServer Compute brings Analysis to Big Data in the Cloud

    Science.gov (United States)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment.SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines.Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally - but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data.SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal.We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts

  10. Programming database tools for the casual user

    International Nuclear Information System (INIS)

    Katz, R.A; Griffiths, C.

    1990-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database management system (INTERBASE) for the storage of all data associated with the control of the particle accelerator complex. This includes the static data which describes the component devices of the complex, as well as data for application program startup and data records that are used in analysis. Due to licensing restrictions, it was necessary to develop tools to allow programs requiring access to a database to be unconcerned with whether or not they were running on a licensed node. An in-house database server program was written, using Apollo mailbox communication protocols, allowing application programs, via calls to this server, to access the InterBase database. Initially, the tools used by the server to actually access the database were written using the GDML C host language interface. Through an evolutionary learning process these tools have been converted to Dynamic SQL. Additionally, these tools have been extracted from the exclusive province of the database server and placed in their own library. This enables application programs to use these same tools on a licensed node without using the database server and without having to modify the application code. The syntax of the C calls remains the same

  11. Preliminary design of the database and registration system for the national malignant tumor interventional therapy

    International Nuclear Information System (INIS)

    Hu Di; Zeng Jinjin; Wang Jianfeng; Zhai Renyou

    2010-01-01

    Objective: This research is one of the sub-projects of 'The comparative study of the standards of interventional therapies and the evaluation of the long-term and middle-term effects for common malignant tumors', one of the National Key Technologies R and D Programs of the eleventh five-year plan. Based on this project, the authors need to establish an international standard in order to set up the national tumor interventional therapy database and registration system. Methods: The program was written in the Java language, using software modules for downloading, self-management and automatic integration. Results: The database and registration system for the national tumor interventional therapy was successfully set up, and it can handle both simple and complex queries. The software worked well through initial debugging. Conclusion: The national tumor interventional therapy database and registration system can not only precisely report the nationwide adoption rate of interventional therapy, compare the results of different methods, provide the latest news concerning interventional therapy, and subsequently promote academic exchanges between hospitals, but also help us obtain information about the distribution of interventional physicians and the quantity and variety of interventional materials consumed, so that medical costs can be reduced. (authors)

  12. Applications of Protein Thermodynamic Database for Understanding Protein Mutant Stability and Designing Stable Mutants.

    Science.gov (United States)

    Gromiha, M Michael; Anoosha, P; Huang, Liang-Tsung

    2016-01-01

    Protein stability is the free energy difference between the unfolded and folded states of a protein, which lies in the range of 5-25 kcal/mol. Experimentally, protein stability is measured with circular dichroism, differential scanning calorimetry, and fluorescence spectroscopy using thermal and denaturant denaturation methods. These experimental data have been accumulated in the form of a database, ProTherm, a thermodynamic database for proteins and mutants. It also contains sequence and structure information of a protein, experimental methods and conditions, and literature information. Different features such as search, display, and sorting options and visualization tools have been incorporated in the database. ProTherm is a valuable resource for understanding/predicting the stability of proteins and it can be accessed at http://www.abren.net/protherm/ . ProTherm has been effectively used to examine the relationship among thermodynamics, structure, and function of proteins. We describe the recent progress on the development of methods for understanding/predicting protein stability, such as (1) general trends of mutational effects on stability, (2) the relationship between the stability of protein mutants and amino acid properties, (3) applications of protein three-dimensional structures for predicting their stability upon point mutations, (4) prediction of protein stability upon single mutations from amino acid sequence, and (5) prediction methods for addressing double mutants. A list of online resources for prediction has also been provided.

  13. C# Database Basics

    CERN Document Server

    Schmalz, Michael

    2012-01-01

    Working with data and databases in C# certainly can be daunting if you're coming from VB6, VBA, or Access. With this hands-on guide, you'll shorten the learning curve considerably as you master accessing, adding, updating, and deleting data with C#-basic skills you need if you intend to program with this language. No previous knowledge of C# is necessary. By following the examples in this book, you'll learn how to tackle several database tasks in C#, such as working with SQL Server, building data entry forms, and using data in a web service. The book's code samples will help you get started

  14. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology.

    Science.gov (United States)

    Markiewicz, Tomasz

    2011-03-30

    database. The internet platform was tested on a PC server with an Intel Core2 Duo T9600 2.8 GHz and 4 GB RAM, using 768x576 pixel, 1.28 MB TIFF-format images referring to meningioma tumour (x400, Ki-67/MIB-1). The time consumption was as follows: for analysis by CAMI locally on the server, 3.5 seconds; for remote analysis, 26 seconds, of which 22 seconds were used for data transfer via the internet connection. For a JPG-format image (102 KB), the time was reduced to 14 seconds. The results confirmed that the designed remote platform can be useful for pathology image analysis. The time consumption depends mainly on the image size and the speed of the internet connection. The presented implementation can be used for many types of analysis with different staining, tissue, and morphometry approaches, etc. A significant problem is the implementation of the JSP page in multithreaded form, so that it can be used in parallel by many users. The presented platform for image analysis in pathology can be especially useful for small laboratories without their own image analysis system.

  15. Development of assessment system for tank earthquake-proof design (ASTEP code) installing automatic operation and knowledge database

    International Nuclear Information System (INIS)

    Maekawa, Akira; Suzuki, Michiaki; Fujii, Yuzo

    2004-01-01

    In a nuclear power station, seismic-proof designs of the various tanks classified as auxiliary installations are required to follow the technical guideline for the seismic-proof design of nuclear power stations, called JEAC4601 for short below. This guideline uses a simple mechanical multi-mass model, but its rather complicated evaluation method requires designers to have knowledge and experience and consumes both time and labor. To resolve those difficulties, the Assessment System for Tank Earthquake-Proof Design (ASTEP for short) has been developed, equipped with automated processing and a knowledge database. For this system, the targeted types of tank are a vertical cylindrical tank that has four supports or a skirt support, a horizontal cylindrical tank that has two saddle supports, and a vertical cylindrical tank or water storage tank with a flat bottom. The system integrates all the seismic-proof design evaluation tools and provides step-by-step menus in the order of the flowchart, enabling designers to use them easily. In addition, it has an input aid that enables users to enter data with ease and a tool that automatically calculates input parameters. The system thus reduces the workload of seismic-proof design evaluation dramatically and does not require much knowledge and experience in this field. Furthermore, the system organizes past statements and technical documents related to seismic-proof design as a knowledge database, so users can obtain output identical to the manual calculation results. Comparing the output of the ASTEP code with the manual calculation results for a typical tank that requires government approval of its design evaluation document, the error was less than one percent, so the validity of the system was confirmed. The system gained favorable comments during its trial run, beyond our expectations. (author)

  16. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  17. Integrated Controlling System and Unified Database for High Throughput Protein Crystallography Experiments

    International Nuclear Information System (INIS)

    Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.; Sasajima, K.; Matsugaki, N.; Suzuki, M.; Kosuge, T.; Wakatsuki, S.

    2004-01-01

    An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. Main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored (except raw X-ray data that are stored in a central data server) in a MySQL relational database. The database contains four mutually linked hierarchical trees describing protein crystals, data collection of protein crystal and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using the secure SSL connection using secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beam line, NW12, at the Photon Factory Advanced Ring for general user experiments

  18. Barcode server: a visualization-based genome analysis system.

    Directory of Open Access Journals (Sweden)

    Fenglou Mao

    Full Text Available We have previously developed a computational method for representing a genome as a barcode image, which makes various genomic features visually apparent. We have demonstrated that this visual capability has made some challenging genome analysis problems relatively easy to solve. We have applied this capability to a number of challenging problems, including (a) identification of horizontally transferred genes, (b) identification of genomic islands with special properties and (c) binning of metagenomic sequences, and achieved highly encouraging results. These application results inspired us to develop this barcode-based genome analysis server for public service, which supports the following capabilities: (a) calculation of the k-mer based barcode image for a provided DNA sequence; (b) detection of sequence fragments in a given genome with distinct barcodes from those of the majority of the genome; (c) clustering of provided DNA sequences into groups having similar barcodes; and (d) homology-based search using BLAST against a genome database for any selected genomic regions deemed to have interesting barcodes. The barcode server provides a job management capability, allowing processing of a large number of analysis jobs for barcode-based comparative genome analyses. The barcode server is accessible at http://csbl1.bmb.uga.edu/Barcode.
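
    The k-mer barcode idea itself is simple to sketch: count k-mer frequencies per fixed-size window so each window becomes one row of an image. The window size and k below are illustrative choices, not the published parameters.

        # Compute a per-window k-mer frequency matrix (the "barcode") for a genome.
        from itertools import product

        def barcode(seq: str, k: int = 2, window: int = 1000):
            kmers = ["".join(p) for p in product("ACGT", repeat=k)]
            rows = []
            for start in range(0, len(seq) - window + 1, window):
                w = seq[start:start + window]
                counts = {m: 0 for m in kmers}
                for i in range(len(w) - k + 1):
                    m = w[i:i + k]
                    if m in counts:  # skip windows containing N or other symbols
                        counts[m] += 1
                total = max(1, sum(counts.values()))
                rows.append([counts[m] / total for m in kmers])  # one barcode row
            # Windows with unusual rows (e.g. transferred regions) stand out when
            # the matrix is rendered as a grayscale image.
            return rows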

  19. The new protein topology graph library web server.

    Science.gov (United States)

    Schäfer, Tim; Scheck, Andreas; Bruneß, Daniel; May, Patrick; Koch, Ina

    2016-02-01

    We present a new, extended version of the Protein Topology Graph Library web server. The Protein Topology Graph Library describes protein topology on the super-secondary structure level. It allows users to compute and visualize protein-ligand graphs and to search for protein structural motifs. The new server features additional information on ligand binding to secondary structure elements, increased usability and an application programming interface (API) to retrieve data, allowing for an automated analysis of protein topology. The Protein Topology Graph Library server is freely available on the web at http://ptgl.uni-frankfurt.de. The website is implemented in PHP, JavaScript, PostgreSQL and Apache. It is supported by all major browsers. The VPLG software that was used to compute the protein-ligand graphs and all other data in the database is available under the GNU public license 2.0 from http://vplg.sourceforge.net. tim.schaefer@bioinformatik.uni-frankfurt.de; ina.koch@bioinformatik.uni-frankfurt.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. PANNZER2: a rapid functional annotation web server.

    Science.gov (United States)

    Törönen, Petri; Medlar, Alan; Holm, Liisa

    2018-05-08

    The unprecedented growth of high-throughput sequencing has led to an ever-widening annotation gap in protein databases. While computational prediction methods are available to make up the shortfall, a majority of public web servers are hindered by practical limitations and poor performance. Here, we introduce PANNZER2 (Protein ANNotation with Z-scoRE), a fast functional annotation web server that provides both Gene Ontology (GO) annotations and free text description predictions. PANNZER2 uses SANSparallel to perform high-performance homology searches, making bulk annotation based on sequence similarity practical. PANNZER2 can output GO annotations from multiple scoring functions, enabling users to see which predictions are robust across predictors. Finally, PANNZER2 predictions scored within the top 10 methods for molecular function and biological process in the CAFA2 NK-full benchmark. The PANNZER2 web server is updated on a monthly schedule and is accessible at http://ekhidna2.biocenter.helsinki.fi/sanspanz/. The source code is available under the GNU Public Licence v3.

  1. Microsoft SQL Server 2014 Business Intelligence development beginner's guide

    CERN Document Server

    Rad, Reza

    2014-01-01

    Written in an easy-to-follow, example-driven format, there are plenty of step-by-step instructions to help get you started! The book has a friendly approach, with the opportunity to learn by experimenting. If you are a BI and Data Warehouse developer new to Microsoft Business Intelligence and looking to get a good understanding of the different components of Microsoft SQL Server for Business Intelligence, this book is for you. It is assumed that you will have some experience in database systems and T-SQL. This book will give you a good overview of each component and scenarios featuring th

  2. Monitoring of small laboratory animal experiments by a designated web-based database.

    Science.gov (United States)

    Frenzel, T; Grohmann, C; Schumacher, U; Krüll, A

    2015-10-01

    Multiple-parametric small animal experiments require, by their very nature, a sufficient number of animals, which may need to be large to obtain statistically significant results.(1) For this reason, database-related systems are required to collect the experimental data as well as to support the later (re-)analysis of the information gained during the experiments. In particular, the monitoring of animal welfare is simplified by the inclusion of warning signals (for instance, a loss in body weight >20%). Digital patient charts have been developed for human patients but are usually not able to fulfill the specific needs of animal experimentation. To address this problem, a unique web-based monitoring system using standard MySQL, PHP, and nginx has been created. PHP was used to create the HTML-based user interface and outputs in a variety of file formats, namely portable document format (PDF) or spreadsheet files. This article demonstrates its fundamental features and the easy and secure access it offers to the data from any place using a web browser. This information will help other researchers create their own individual databases in a similar way. The use of QR codes plays an important role in the stress-free use of the database. We demonstrate a way to easily identify all animals, samples, and data collected during the experiments. Specific ways to record animal irradiations and chemotherapy applications are shown. This new analysis tool allows the effective and detailed analysis of the large amounts of data collected through small animal experiments. It supports proper statistical evaluation of the data and provides excellent retrievable data storage. © The Author(s) 2015.
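
    To make the warning-signal idea concrete, here is a minimal Python/SQLite sketch that flags animals whose latest weight has dropped more than 20% below their first recorded weight. The schema is an illustrative stand-in, not the authors' actual MySQL schema.

        import sqlite3

        conn = sqlite3.connect("experiments.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS weight_log (
            animal_id   TEXT,
            measured_on DATE,
            weight_g    REAL)""")

        def weight_loss_warnings(conn, threshold=0.20):
            """Return (animal, baseline, latest) for every animal whose
            latest weight is more than `threshold` below baseline."""
            rows = conn.execute(
                "SELECT animal_id, measured_on, weight_g "
                "FROM weight_log ORDER BY animal_id, measured_on").fetchall()
            baseline, latest = {}, {}
            for animal, day, weight in rows:
                baseline.setdefault(animal, weight)  # first measurement
                latest[animal] = weight              # rows are date-ordered
            return [(a, baseline[a], w) for a, w in latest.items()
                    if w < (1.0 - threshold) * baseline[a]]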

  3. Uncertainty Modeling for Database Design using Intuitionistic and Rough Set Theory

    Science.gov (United States)

    2009-01-01

    Definition: an intuitionistic rough relation R is a subset of the cross product P(D1) × P(D2) × ··· × P(Dm) × Dµ × Dv. For a specific relation R ... that aj ∈ dij for all j. The interpretation space is the cross product D1 × D2 × ··· × Dm × Dµ × Dv, but is limited for a given relation R to the set ...

  4. Using the Textpresso Site-Specific Recombinases Web server to identify Cre expressing mouse strains and floxed alleles.

    Science.gov (United States)

    Condie, Brian G; Urbanski, William M

    2014-01-01

    Effective tools for searching the biomedical literature are essential for identifying reagents or mouse strains as well as for effective experimental design and informed interpretation of experimental results. We have built the Textpresso Site-Specific Recombinases (Textpresso SSR) Web server to enable researchers who use mice to perform in-depth searches of a rapidly growing and complex part of the mouse literature. Our Textpresso Web server provides an interface for searching the full text of most of the peer-reviewed publications that report the characterization or use of mouse strains that express Cre or Flp recombinase. The database also contains most of the publications that describe the characterization or analysis of strains carrying conditional alleles or transgenes that can be inactivated or activated by site-specific recombinases such as Cre or Flp. Textpresso SSR complements the existing online databases that catalog Cre and Flp expression patterns by providing a unique online interface for in-depth text mining of the site-specific recombinase literature.

  5. Catalytic site identification—a web server to identify catalytic site structural matches throughout PDB

    Science.gov (United States)

    Kirshner, Daniel A.; Nilmeier, Jerome P.; Lightstone, Felice C.

    2013-01-01

    The catalytic site identification web server provides the innovative capability to find structural matches to a user-specified catalytic site among all Protein Data Bank proteins rapidly (in less than a minute). The server also can examine a user-specified protein structure or model to identify structural matches to a library of catalytic sites. Finally, the server provides a database of pre-calculated matches between all Protein Data Bank proteins and the library of catalytic sites. The database has been used to derive a set of hypothesized novel enzymatic function annotations. In all cases, matches and putative binding sites (protein structure and surfaces) can be visualized interactively online. The website can be accessed at http://catsid.llnl.gov. PMID:23680785

  6. Design Integration of Man-Machine Interface (MMI) Display Drawings and MMI Database

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong Jun; Seo, Kwang Rak; Song, Jeong Woog; Kim, Dae Ho; Han, Jung A [KEPCO Engineering and Construction Co., Deajeon (Korea, Republic of)

    2016-10-15

    The conventional Main Control Room (MCR) was designed using hardwired controllers and analog indications mounted on control boards for control and acquisition of plant information. This is in contrast with the advanced MCR design, where Flat Panel Displays (FPDs) with soft controls and mimic displays are used. The advanced design needs MMI display drawings replacing the conventional control board layout drawings and component lists, together with a design database (DB) whose data are linked to the related objects of the MMI displays. Compilation of the data into the DB is generally done manually, which tends to introduce errors and discrepancies. Also, updating and managing the DB is difficult because of the huge number of entries, and each update must closely track the changes in the associated drawing. Therefore, automating the DB update whenever a related drawing is updated would be quite beneficial. An attempt was made to develop a new method to integrate the MMI display drawing design and the DB management, which would significantly reduce the number of errors and improve design quality. The design integration of the MMI display drawings and the MMI DB is explained briefly but concisely in this paper. The existing method involved individually and separately inputting design data for the MMI display drawings and the DB. This led to the potential problem of data discrepancies and errors, as well as an update time lag between related drawings and the DB, and motivated the development of an integrated design process that automates the design data input activity, as sketched below.
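
    A minimal Python sketch of that idea, assuming each drawing can export its object list as CSV; the file layout, column names and the SQLite target are hypothetical stand-ins for the actual toolchain.

        import csv
        import sqlite3

        def sync_drawing_to_db(drawing_csv, db_path="mmi.db"):
            """Upsert a drawing's object list into the MMI DB, so the DB
            tracks the drawing automatically instead of by manual entry."""
            conn = sqlite3.connect(db_path)
            conn.execute("""CREATE TABLE IF NOT EXISTS mmi_object (
                object_id TEXT PRIMARY KEY,
                drawing   TEXT,
                tag_name  TEXT,
                signal_id TEXT)""")
            with open(drawing_csv, newline="") as f:
                for row in csv.DictReader(f):
                    conn.execute(
                        "INSERT OR REPLACE INTO mmi_object VALUES (?,?,?,?)",
                        (row["object_id"], row["drawing"],
                         row["tag_name"], row["signal_id"]))
            conn.commit()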

  7. Pictorial materials database: 1200 combinations of pigments, dyes, binders and varnishes designed as a tool for heritage science and conservation

    Science.gov (United States)

    Cavaleri, Tiziana; Buscaglia, Paola; Migliorini, Simonetta; Nervo, Marco; Piccablotto, Gabriele; Piccirillo, Anna; Pisani, Marco; Puglisi, Davide; Vaudan, Dario; Zucco, Massimo

    2017-06-01

    The conservation of artworks requires profound knowledge of pictorial materials, their chemical and physical properties and their interaction and/or degradation processes. For this reason, pictorial materials databases are widely used to study and investigate cultural heritage. At the Centre for Conservation and Restoration La Venaria Reale, we prepared a set of about 1200 mock-ups with 173 different pigments and/or dyes, used across all historical periods or as products for conservation, four binders, two varnishes and four different materials for underdrawings. In collaboration with the Laboratorio Analisi Scientifiche of Regione Autonoma Valle d'Aosta, the National Institute of Metrological Research and the Department of Architecture and Design of the Polytechnic of Turin, we created a scientific database that is now available online (http://www.centrorestaurovenaria.it/en/areas/diagnostic/pictorial-materials-database), designed as a tool for heritage science and conservation. Here, we present a focus on materials for pictorial retouching, where the application of hyperspectral imaging, conducted with a prototype of a new technology, made it possible to propose a list of pigments that could be more suitable for conservation treatments and pictorial retouching. We then present the case study of the industrial painting Notte Barbara (1962) by Pinot Gallizio, where the use of the database, which includes modern and contemporary art materials, proved very useful and where the fibre optics reflectance spectroscopy technique was decisive for pigment identification. Later in this research, the mock-ups will be exploited to study degradation processes, e.g., lightfastness, and the possible formation of interaction products, e.g., metal carboxylates.

  8. Server for experimental data from LHD

    International Nuclear Information System (INIS)

    Emoto, M.; Ohdachi, S.; Watanabe, K.; Sudo, S.; Nagayama, Y.

    2006-01-01

    In order to unify various types of data, the Kaiseki Server was developed. This server provides physical experimental data from Large Helical Device (LHD) experiments. Many types of data acquisition systems are currently in operation, and they produce files in various formats. It has therefore been difficult to analyze different types of acquisition data at the same time, because each system's data must be read in its own particular manner. To facilitate the use of these data by researchers, the authors have developed a new server system, which provides a unified data format and a unique data retrieval interface. Although the Kaiseki Server satisfied the initial demand, new requests arose from researchers, one of which was remote usage of the server. The current system cannot be used remotely because of security issues. Another request was group ownership, i.e., users belonging to the same group should have equal access to data. To satisfy these demands, the authors modified the server. However, since other requests may arise in the future, the new system must be flexible enough to satisfy future demands. Therefore, the authors decided to develop the new server using a three-tier structure.

  9. Environment server. Digital field information archival technology

    International Nuclear Information System (INIS)

    Kita, Nobuyuki; Kita, Yasuyo; Yang, Hai-quan

    2002-01-01

    For the safe operation of nuclear power plants, it is important to store various kinds of plant information for long periods and to visualize the stored information as desired. A system called Environment Server has been developed for this purpose. In this paper, the general concepts of the Environment Server are explained, and a partial implementation, which archives the image information gathered by mobile inspection robots into a virtual world and visualizes it, is described. An extension of the Environment Server for supporting attention sharing is also briefly introduced. (author)

  10. Training to Increase Safe Tray Carrying Among Cocktail Servers

    OpenAIRE

    Scherrer, Megan D; Wilder, David A

    2008-01-01

    We evaluated the effects of training on proper carrying techniques among 3 cocktail servers to increase safe tray carrying on the job and reduce participants' risk of developing musculoskeletal disorders. As participants delivered drinks to their tables, their finger, arm, and neck positions were observed and recorded. Each participant received individual safety training that focused on proper carrying positions and techniques after baseline data were collected. A multiple baseline design acr...

  11. EarthServer - 3D Visualization on the Web

    Science.gov (United States)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences-related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or addons. Additionally, we are able to run the Earth data visualization client on a wide range of platforms with very different soft- and hardware capabilities, such as smartphones (e.g. iOS, Android) and various desktop systems. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can now be realized, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. Underlying the EarthServer web client
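
    To give a flavor of the WCPS processing language mentioned above, the Python sketch below sends one query over HTTP. The endpoint URL and coverage name are placeholders, and the ProcessCoverages key-value encoding shown is the convention used by rasdaman-based servers; a real EarthServer endpoint may differ.

        from urllib.parse import urlencode
        from urllib.request import urlopen

        ENDPOINT = "https://example.org/rasdaman/ows"  # placeholder endpoint

        # WCPS: slice a hypothetical 3-D coverage at one time step, return PNG.
        wcps = 'for c in (mean_temperature) return encode(c[ansi("2010-01-01")], "png")'

        params = urlencode({"service": "WCS", "version": "2.0.1",
                            "request": "ProcessCoverages", "query": wcps})
        with urlopen(f"{ENDPOINT}?{params}") as resp:
            png_bytes = resp.read()  # ready for display in a web client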

  12. Handling of network and database instabilities in CORAL

    International Nuclear Information System (INIS)

    Trentadue, R; Valassi, A; Kalkhof, A

    2012-01-01

    The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several back-ends and deployment models, direct client access to Oracle servers being one of the most important use cases. Since 2010, several problems have been reported by the LHC experiments in their use of Oracle through CORAL, involving application errors, hangs or crashes after the network or the database servers became temporarily unavailable. CORAL already provided some level of handling of these instabilities, which are due to external causes and cannot be avoided; however, this handling proved insufficient in some cases and was itself the cause of other problems, such as the hangs and crashes mentioned before, in other cases. As a consequence, a major redesign of the CORAL plugins has been implemented, with the aim of making the software more robust against these database and network glitches. The new implementation ensures that CORAL automatically reconnects to Oracle databases in a transparent way whenever possible and gently terminates the application when this is not possible. Internally, this is done by resetting all relevant parameters of the underlying back-end technology (OCI, the Oracle Call Interface). This presentation reports on the status of this work at the time of the CHEP2012 conference, covering the design and implementation of these new features and the outlook for future developments in this area.
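
    CORAL itself is C++, but the reconnect-transparently-or-terminate-gently policy described above can be sketched language-neutrally. Below is a schematic in Python under that assumption, with a stand-in exception class for the driver's "connection lost" error.

        import time

        class TransientDbError(Exception):
            """Stand-in for a driver's 'connection lost' error class."""

        def with_reconnect(call, reconnect, attempts=3, delay=2.0):
            """Run call(); on a transient failure, re-open the session via
            reconnect() and retry.  After the last attempt the error is
            re-raised so the application can terminate cleanly."""
            for attempt in range(1, attempts + 1):
                try:
                    return call()
                except TransientDbError:
                    if attempt == attempts:
                        raise
                    time.sleep(delay)
                    reconnect()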

  13. jSPyDB, an open source database-independent tool for data management

    CERN Document Server

    Pierro, Giuseppe Antonio

    2010-01-01

    The number of commercial tools available for accessing databases, built on Java or .NET, is increasing. However, many of these applications have several drawbacks: usually they are not open source, they provide interfaces to only one specific kind of database, and they are platform-dependent and consume considerable CPU and memory. jSPyDB is a free web-based tool written in Python and JavaScript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a back-end server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. ...
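
    A minimal SQLAlchemy sketch of the kind of back-end-independent access the server layer performs: only the connection URL names the database technology, and the URL below is an example placeholder.

        from sqlalchemy import create_engine, inspect, text

        engine = create_engine("postgresql://user:pw@dbhost/mydb")  # example URL

        print(inspect(engine).get_table_names())       # schema introspection
        with engine.connect() as conn:                 # data manipulation
            print(conn.execute(text("SELECT 1")).scalar())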

  14. Earth System Model Development and Analysis using FRE-Curator and Live Access Servers: On-demand analysis of climate model output with data provenance.

    Science.gov (United States)

    Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.

    2016-12-01

    There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings and produce large quantities of output that must be further analyzed and quality-controlled for scientific papers and submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities lies only partially in the scientific content; much effort is devoted to answering the questions "where is the data?" and "how do I get to it?". FRE-Curator is the name of a database-centric paradigm used at NOAA GFDL to ingest information about the model runs into an RDBMS (Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses Live Access Servers, a highly configurable web server designed to provide flexible access to geo-referenced scientific data that makes use of OPeNDAP. Model output saved in GFDL's tape archive, the size of the database and experiments, and continuous model development initiatives with more dynamic configurations add complexity and challenges in providing an on

  15. Rdesign: A data dictionary with relational database design capabilities in Ada

    Science.gov (United States)

    Lekkos, Anthony A.; Kwok, Teresa Ting-Yin

    1986-01-01

    A Data Dictionary is defined to be the set of all data attributes, which describe data objects in terms of their intrinsic attributes, such as name, type, size, format and definition. It is recognized as the database for Information Resource Management: it facilitates understanding and communication about the relationship between systems applications and their data usage, and it assists in achieving data independence by permitting systems applications to access data without knowledge of the location or storage characteristics of the data in the system. A research and development effort using Ada has produced a data dictionary with database design capabilities. This project supports data specification and analysis and offers a choice of the relational, network, and hierarchical models for logical database design. It provides a highly integrated set of analysis and design transformation tools, which range from templates for data element definition and a spreadsheet for defining functional dependencies, through normalization, to a logical design generator.
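
    The normalization support rests on functional-dependency reasoning, whose basic step is the attribute closure. Below is the standard closure routine in Python (an illustration, not the Ada implementation described above).

        def closure(attrs, fds):
            """Attribute closure of `attrs` under functional dependencies
            `fds`, given as (lhs, rhs) pairs of attribute sets."""
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if set(lhs) <= result and not set(rhs) <= result:
                        result |= set(rhs)
                        changed = True
            return result

        # With A -> B and B -> C, {A}+ = {A, B, C}: A is a key of R(A, B, C).
        fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
        print(closure({"A"}, fds))  # {'A', 'B', 'C'}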

  16. Structure analysis of designated hospitals for cancer control in Japan from JASTRO census survey database 2005

    International Nuclear Information System (INIS)

    Ikeda, Hiroshi; Kagami, Yoshikazu; Nishio, Masamichi; Kataoka, Masaaki; Matsumoto, Yasuo; Hatano, Kazuo; Ogino, Takashi

    2008-01-01

    The structures of 288 hospitals designated for cancer control and approved by the Ministry of Health, Labour and Welfare in February 2006 were analyzed from the radiotherapy perspective, according to the Japanese Society for Therapeutic Radiology and Oncology (JASTRO) 2005 census survey data. The data were compiled from 266 hospitals. Overall, 78,086 new patients were treated at these designated hospitals, which accounts for just about half of the total number of patients in Japan. An adequate radiotherapy (RT) structure is essential for cancer management, and our study showed that the designated hospitals fall short of the RT requirements. No RT equipment is installed in 14 hospitals. Of the 266 hospitals, 109 treated fewer than 200 new patients in 2005, and 25 treated fewer than 100. The data analysis revealed that academic hospitals, JACC hospitals and others are reasonable in terms of radiotherapy structure and capacity. Moreover, both academic and JACC hospitals play roles similar to those of designated prefectural hospitals in cancer management by radiotherapy. (author)

  17. TAPIR, a web server for the prediction of plant microRNA targets, including target mimics.

    Science.gov (United States)

    Bonnet, Eric; He, Ying; Billiau, Kenny; Van de Peer, Yves

    2010-06-15

    We present a new web server called TAPIR, designed for the prediction of plant microRNA targets. The server offers the possibility to search for plant miRNA targets using either a fast or a precise algorithm. The precise option is much slower but is guaranteed to find even imperfectly paired miRNA-target duplexes. Furthermore, the precise option allows the prediction of target mimics, which are characterized by a miRNA-target duplex having a large loop, making them undetectable by traditional tools. The TAPIR web server can be accessed at: http://bioinformatics.psb.ugent.be/webtools/tapir. Supplementary data are available at Bioinformatics online.

  18. Techno-experiential design assessment and media experience database: A method for emerging technology assessment

    OpenAIRE

    Schick, Dan

    2005-01-01

    This thesis evaluates the Techno-Experiential Design Assessment (TEDA) for social research on new media and emerging technology. Dr. Roman Onufrijchuk developed TEDA to address the shortcomings of current methods designed for studying existing technologies. Drawing from the ideas of Canadian media theorist Marshall McLuhan, TEDA focuses on the environmental changes introduced by a new technology into a user's life. I describe the key components of the TEDA methodology and provide examples of ...

  19. A practical neutron shielding design based on data-base interpolation

    International Nuclear Information System (INIS)

    Jiang, S.H.; Sheu, R.J.

    1993-01-01

    Neutron shielding design is an important part of the construction of nuclear reactors and high-energy accelerators. It is also indispensable in the packaging and storage of isotopic neutron sources. Most efforts in the development of neutron shielding design have been concentrated on nuclear reactor shielding because of its huge mass and strict accuracy requirements. Sophisticated computational tools, such as transport and Monte Carlo codes, and detailed data libraries have been developed. In principle, neutron shielding, in spite of its complexity, can now be designed in any detail and with fine accuracy. However, in most practical cases, neutron shielding design is accomplished with simplified methods. Unlike practical gamma-ray shielding design, where exponential attenuation coupled with buildup factors has been applied effectively and accurately, simplified neutron shielding design, either by using removal cross sections or by applying charts or tables of transmission factors such as those in National Council on Radiation Protection and Measurements (NCRP) Report 38 (Ref. 1) for general neutron protection or in NCRP Report 51 (Ref. 2) for accelerator neutron shielding, is still very primitive and not well established. The available data are limited in energy range, materials, and thicknesses, and the estimated results are only roughly accurate. It is the purpose of this work to establish a simple, convenient, and user-friendly general-purpose computational tool for practical preliminary neutron shielding design that is reasonably accurate. A wide-range (energy, material, and thickness) database of dose transmission factors has been generated by applying one-dimensional transport calculations in slab geometry.
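
    A minimal sketch of the database lookup such a tool performs: interpolation of dose transmission factors in a (thickness, factor) table. The numbers are invented for illustration; interpolation is done on log(factor) because attenuation is roughly exponential in thickness.

        import math
        from bisect import bisect_left

        def transmission(thickness, table):
            """Log-linear interpolation in a (thickness, factor) table,
            clamped to the table's end points."""
            xs = [t for t, _ in table]
            i = bisect_left(xs, thickness)
            if i == 0:
                return table[0][1]
            if i == len(xs):
                return table[-1][1]
            (x0, f0), (x1, f1) = table[i - 1], table[i]
            w = (thickness - x0) / (x1 - x0)
            return math.exp((1 - w) * math.log(f0) + w * math.log(f1))

        # Hypothetical slab table: (thickness in cm, transmission factor).
        table = [(0, 1.0), (20, 0.30), (40, 0.09), (60, 0.027)]
        print(transmission(30, table))  # ~0.16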

  20. PostgreSQL server programming

    CERN Document Server

    Dar, Usama; Mlodgenski, Jim; Roybal, Kirk

    2015-01-01

    This book is for moderate to advanced PostgreSQL database professionals who wish to extend PostgreSQL using the latest features of PostgreSQL 9.4. To get the most out of this book, familiarity with writing SQL, a basic idea of query tuning, and some coding experience in your preferred language are expected.
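
    As a taste of the book's subject, here is the smallest possible piece of server-side programming: a PL/pgSQL function installed and called from Python via psycopg2. The connection parameters and the tax rate are placeholders.

        import psycopg2

        conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN
        with conn, conn.cursor() as cur:
            cur.execute("""
                CREATE OR REPLACE FUNCTION add_tax(amount numeric)
                RETURNS numeric AS $$
                BEGIN
                    RETURN amount * 1.19;  -- fixed rate for illustration
                END;
                $$ LANGUAGE plpgsql;
            """)
            cur.execute("SELECT add_tax(100)")
            print(cur.fetchone()[0])  # 119.00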