WorldWideScience

Sample records for server log information

  1. Using Web Server Logs in Evaluating Instructional Web Sites.

    Science.gov (United States)

    Ingram, Albert L.

    2000-01-01

    Web server logs contain a great deal of information about who uses a Web site and how they use it. This article discusses the analysis of Web logs for instructional Web sites; reviews the data stored in most Web server logs; demonstrates what further information can be gleaned from the logs; and discusses analyzing that information for the…

  2. Analysis of Web Server Log Files: Website of Information Management Department of Hacettepe University

    Directory of Open Access Journals (Sweden)

    Mandana Mir Moftakhari

    2015-09-01

    Full Text Available Over the last decade, the importance of analysing information management system logs has grown, because analysis of log data has proved helpful in improving information system design and the interface and architecture of websites. Log file analysis is one of the best ways to understand the information-searching processes of online searchers and users’ needs, interests, knowledge, and prejudices. Data collected in the transaction logs of web search engines help designers, researchers and website managers uncover the complex interactions of users’ goals and behaviours and thereby increase the efficiency and effectiveness of websites. Before starting any analysis, it should be verified that the website’s log files contain enough information; otherwise the analyst cannot produce a complete report. In this study we evaluate the website of the Information Management Department of Hacettepe University by analysing its server log files. The results show that the log files provided by the web server do not contain an adequate amount of information. The reports we created offer some insight into users’ behaviour and needs, but they are not sufficient for making sound decisions about the content and hyperlink structure of the website. The study also shows that creating an extended log file is essential for the website. Finally, we believe the results can help improve, redesign and create a better website.
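
    As a rough illustration of the kind of server-log summarisation such studies rely on, the sketch below parses entries in the NCSA common log format and reports page hits and distinct visitors. It is a minimal Python example; the log file name, the regular expression and the report layout are assumptions made for illustration, not the authors' actual tooling.

      import re
      from collections import Counter

      # NCSA common log format, e.g.
      # 10.0.0.1 - - [12/Mar/2015:10:15:32 +0200] "GET /index.html HTTP/1.1" 200 5123
      LOG_LINE = re.compile(
          r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
          r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+')

      def summarize(log_path):
          """Count successful page hits per URL and distinct client hosts."""
          hits, hosts = Counter(), set()
          with open(log_path, encoding="utf-8", errors="replace") as fh:
              for line in fh:
                  m = LOG_LINE.match(line)
                  if not m:
                      continue                      # skip malformed entries
                  hosts.add(m.group("host"))
                  if m.group("status").startswith("2"):
                      hits[m.group("path")] += 1
          return hits, hosts

      if __name__ == "__main__":
          hits, hosts = summarize("access.log")     # hypothetical log file name
          print(f"{len(hosts)} distinct client hosts")
          for path, count in hits.most_common(10):
              print(f"{count:6d}  {path}")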

  3. Using Web Server Logs to Track Users through the Electronic Forest

    Science.gov (United States)

    Coombs, Karen A.

    2005-01-01

    This article analyzes server logs, which provide helpful information for making decisions about Web-based services. The author indicates that, as a result of analyzing server logs, several interesting things about users' behavior were learned, and the resulting findings are discussed in this article. Certain pages of the author's Web site, for instance, are…

  4. Conversation Threads Hidden within Email Server Logs

    Science.gov (United States)

    Palus, Sebastian; Kazienko, Przemysław

    Email server logs contain records of all email exchanged through the server. Often we would like to analyze those emails not separately but as conversation threads, especially when we need to analyze a social network extracted from the email logs. Unfortunately, each mail is stored in a different record, and those records are not tied to each other in any obvious way. In this paper a method for discussion thread extraction is proposed, together with experiments on two different data sets - Enron and WrUT.

  5. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    Directory of Open Access Journals (Sweden)

    Jianwei Liao

    2014-01-01

    Full Text Available This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that have already been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, and metadata processing performance improves as well. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead the metadata server incurs when it adopts logging or journaling to yield a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Besides, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a nonoperational state.
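
    The core idea — clients keep the already-handled metadata requests in memory instead of the MDS journaling them, and recovery replays those client-side backups — can be sketched as follows. This is a simplified Python illustration under assumptions of my own (single-path operations, per-client sequence numbers, no global ordering), not the paper's implementation.

      from dataclasses import dataclass
      from typing import Dict, List, Tuple

      @dataclass
      class MetaRequest:
          seq: int        # per-client sequence number
          op: str         # e.g. "create" or "unlink"
          path: str

      class ClientFS:
          """Client file system that backs up handled requests in memory."""
          def __init__(self, client_id: str):
              self.client_id = client_id
              self.backup: List[MetaRequest] = []
              self._seq = 0

          def send(self, mds: "MDS", op: str, path: str):
              req = MetaRequest(self._seq, op, path)
              self._seq += 1
              mds.handle(self.client_id, req)   # MDS applies the change in memory only
              self.backup.append(req)           # the client, not the MDS, keeps the log

      class MDS:
          """In-memory metadata server with no journal of its own."""
          def __init__(self):
              self.namespace: Dict[str, str] = {}   # path -> owning client

          def handle(self, client_id: str, req: MetaRequest):
              if req.op == "create":
                  self.namespace[req.path] = client_id
              elif req.op == "unlink":
                  self.namespace.pop(req.path, None)

          def recover(self, clients: List["ClientFS"]):
              """Rebuild metadata by replaying every client's backed-up requests.
              A real system would need a globally consistent replay order."""
              self.namespace.clear()
              replay: List[Tuple[int, str, MetaRequest]] = [
                  (r.seq, c.client_id, r) for c in clients for r in c.backup]
              for _, client_id, req in sorted(replay, key=lambda t: t[0]):
                  self.handle(client_id, req)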

  6. Using ‘search transitions’ to study searchers’ investment of effort: experiences with client and server side logging

    OpenAIRE

    Pharo, Nils; Nordlie, Ragnar

    2013-01-01

    We are investigating the value of using the concept ‘search transition’ for studying effort invested in information search processes. In this paper we present findings from a comparative study of data collected from client-side and server-side logging. The purpose is to see which factors of effort can be captured with the two logging methods. The data stem from studies of searchers’ interaction with an XML information retrieval system. The searchers’ interaction was simultaneously logged by a scree...

  7. Mining the SDSS SkyServer SQL queries log

    Science.gov (United States)

    Hirota, Vitor M.; Santos, Rafael; Raddick, Jordan; Thakar, Ani

    2016-05-01

    SkyServer, the Internet portal for the Sloan Digital Sky Survey (SDSS) astronomical catalog, provides a set of tools that allows data access for astronomers and for science education. One of SkyServer's data access interfaces allows users to enter ad-hoc SQL statements to query the catalog. SkyServer also presents some template queries that can be used as a basis for more complex queries. This interface has logged over 330 million queries submitted since 2001. It is expected that analysis of this data can be used to investigate usage patterns, identify potential new classes of queries, find similar queries, etc., and to shed some light on how users interact with the Sloan Digital Sky Survey data and how scientists have adopted the new paradigm of e-Science, which could in turn lead to enhancements to the user interfaces and experience in general. In this paper we review some approaches to SQL query mining, apply the traditional techniques used in the literature and present lessons learned, namely, that the general text mining approach for feature extraction and clustering does not seem to be adequate for this type of data and, most importantly, that this type of analysis can result in very different queries being clustered together.
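
    The "traditional" pipeline the paper evaluates — treating each SQL query as text, extracting features, and clustering — can be sketched as below, assuming scikit-learn is available. The example queries and parameter choices are illustrative only, and the paper's own conclusion is that this generic approach is not adequate for the SkyServer log.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.cluster import KMeans

      # a few illustrative SkyServer-style queries (not taken from the actual log)
      queries = [
          "SELECT TOP 10 objID, ra, dec FROM PhotoObj WHERE ra BETWEEN 180 AND 181",
          "SELECT p.objID, s.z FROM PhotoObj p JOIN SpecObj s ON p.objID = s.bestObjID",
          "SELECT COUNT(*) FROM PhotoObj WHERE r < 17.5",
          "SELECT TOP 10 objID, u, g, r FROM PhotoObj WHERE g - r > 0.5",
      ]

      # tokenise on SQL identifiers/keywords and weight tokens with TF-IDF
      vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z_][A-Za-z0-9_.]*")
      X = vectorizer.fit_transform(queries)

      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
      for label, query in zip(km.labels_, queries):
          print(label, query[:60])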

  8. Getting to the Source: a Survey of Quantitative Data Sources Available to the Everyday Librarian: Part 1: Web Server Log Analysis

    Directory of Open Access Journals (Sweden)

    Lisa Goddard

    2007-03-01

    Full Text Available This is the first part of a two‐part article that provides a survey of data sources which are likely to be immediately available to the typical practitioner who wishes to engage in statistical analysis of collections and services within his or her own library. Part I outlines the data elements which can be extracted from web server logs, and discusses web log analysis tools. Part II looks at logs, reports, and data sources from proxy servers, resource vendors, link resolvers, federated search engines, institutional repositories, electronic reference services, and the integrated library system.

  9. Environment server. Digital field information archival technology

    International Nuclear Information System (INIS)

    Kita, Nobuyuki; Kita, Yasuyo; Yang, Hai-quan

    2002-01-01

    For the safe operation of nuclear power plants, it is important to store various kinds of information about the plants for a long period and to visualize the stored information as desired. A system called Environment Server has been developed to realize this. In this paper, the general concepts of Environment Server are explained, and its partial implementation for archiving the image information gathered by inspection mobile robots into a virtual world and visualizing it is described. An extension of Environment Server for supporting attention sharing is also briefly introduced. (author)

  10. An efficient biometric and password-based remote user authentication using smart card for Telecare Medical Information Systems in multi-server environment.

    Science.gov (United States)

    Maitra, Tanmoy; Giri, Debasis

    2014-12-01

    The medical organizations have introduced the Telecare Medical Information System (TMIS) to provide a reliable facility by which a patient who is unable to go to a doctor in a critical or urgent period can communicate with a doctor through a medical server via the internet from home. An authentication mechanism is needed in TMIS to hide the secret information of both parties, namely a server and a patient. Recent research includes the patient's biometric information as well as a password to design remote user authentication schemes that enhance the security level. In a single-server environment, one server is responsible for providing services to all authorized remote patients. However, a problem arises if a patient wishes to access several branch servers: he/she needs to register with each branch server individually. In 2014, Chuang and Chen proposed a remote user authentication scheme for the multi-server environment. In this paper, we show that in their scheme a non-registered adversary can successfully log in to the system as a valid patient. To resist these weaknesses, we propose an authentication scheme for TMIS in the multi-server environment, in which patients register only once with a root telecare server called the registration center (RC) to get services from all telecare branch servers through their registered smart card. Security analysis and comparison show that our proposed scheme provides better security with low computational and communication cost.

  11. Web server attack analyzer

    OpenAIRE

    Mižišin, Michal

    2013-01-01

    Web server attack analyzer - Abstract The goal of this work was to create a prototype analyzer of injection-flaw attacks on web servers. The proposed solution combines the capabilities of a web application firewall and a web server log analyzer. The analysis is based on configurable signatures defined by regular expressions. This paper begins with a summary of web attacks, followed by an analysis of detection techniques on web servers and a description and justification of the selected implementation. In the end are charact...

  12. Windows server cookbook for Windows server 2003 and Windows 2000

    CERN Document Server

    Allen, Robbie

    2005-01-01

    This practical reference guide offers hundreds of useful tasks for managing Windows 2000 and Windows Server 2003, Microsoft's latest server. These concise, on-the-job solutions to common problems are certain to save you many hours of time searching through Microsoft documentation. Topics include files, event logs, security, DHCP, DNS, backup/restore, and more

  13. NETWORK LOAD WHEN ACCESSING EMAIL FROM SEVERAL MAIL SERVERS

    Directory of Open Access Journals (Sweden)

    Husni Thamrin

    2017-01-01

    Full Text Available Expensive internet facilities require prudence in their use, both as a source of information and as a communication medium. This paper discusses observations of the perceived network bandwidth load when accessing several mail servers using a webmail application. The mail servers in question consist of three commercial servers and two non-commercial servers. Data captured while downloading the home page, while logged in, while opening email, and during idle logout were recorded with the Wireshark sniffer. Observations in various situations and scenarios indicate that accessing Yahoo email produces a very high network load, while SquirrelMail produces a very low network load compared with the other mail servers. For an institution, the use of a local (institutional) mail server is highly recommended in the context of bandwidth savings.

  14. Detection of attack-targeted scans from the Apache HTTP Server access logs

    Directory of Open Access Journals (Sweden)

    Merve Baş Seyyar

    2018-01-01

    Full Text Available A web application can be visited for different purposes. It is possible for a web site to be visited by a regular user as a normal (natural) visit, to be viewed by crawlers, bots, spiders, etc. for indexing purposes, and lastly to be exploratorily scanned by malicious users prior to an attack. An attack-targeted web scan can be viewed as a phase of a potential attack, and its detection can lead to more attacks being detected than with traditional detection methods. In this work, we propose a method to detect attack-oriented scans and to distinguish them from other types of visits. In this context, we use the access log files of Apache (or IIS) web servers and try to determine attack situations through examination of past data. In addition to web scan detection, we include a rule set to detect SQL Injection and XSS attacks. Our approach has been applied to sample data sets and the results have been analyzed in terms of performance measures to compare our method with other commonly used detection techniques. Furthermore, various tests have been made on log samples from real systems. Lastly, several suggestions for further development are also discussed.
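
    A bare-bones version of the signature idea — decode each requested URL from the access log and match it against regular-expression rules for SQL injection and XSS — might look like the Python sketch below. The rules and the log file name are illustrative assumptions, not the rule set used in the paper.

      import re
      from urllib.parse import unquote

      RULES = {
          "sqli": re.compile(r"(?i)(union\s+select|or\s+1=1|information_schema|sleep\()"),
          "xss":  re.compile(r"(?i)(<script|onerror\s*=|javascript:)"),
      }
      REQUEST = re.compile(r'"(?:GET|POST|HEAD) (?P<target>\S+) HTTP/[\d.]+"')

      def scan_log(path):
          """Return (line number, rule name, decoded target) for suspicious requests."""
          alerts = []
          with open(path, errors="replace") as fh:
              for lineno, line in enumerate(fh, 1):
                  m = REQUEST.search(line)
                  if not m:
                      continue
                  target = unquote(m.group("target"))   # decode %27, %3C, etc.
                  for name, rule in RULES.items():
                      if rule.search(target):
                          alerts.append((lineno, name, target))
          return alerts

      for lineno, rule, target in scan_log("access.log"):   # hypothetical file name
          print(f"line {lineno}: {rule} suspicion in {target}")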

  15. Analyzing Log Files using Data-Mining

    Directory of Open Access Journals (Sweden)

    Marius Mihut

    2008-01-01

    Full Text Available Information systems (i.e. servers, applications and communication devices) create a large amount of monitoring data that are saved as log files. For analyzing them, a data-mining approach is helpful. This article presents the steps which are necessary for creating an ‘analyzing instrument’, based on open source software called Waikato Environment for Knowledge Analysis (Weka) [1]. For exemplification, a system log file created by a Windows-based operating system is used as the input file.

  16. CMLOG: A common message logging system

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Bickley, M.; Wu, D.; Watson, W. III

    1997-01-01

    The Common Message Logging (CMLOG) system is an object-oriented and distributed system that not only allows applications and systems to log data (messages) of any type into a centralized database but also lets applications view incoming messages in real time or retrieve stored data from the database according to selection rules. It consists of a concurrent Unix server that handles incoming logging or searching messages, a Motif browser that can view incoming messages in real time or display stored data in the database, a client daemon that buffers and sends logging messages to the server, and libraries that can be used by applications to send data to or retrieve data from the database via the server. This paper presents the design and implementation of the CMLOG system and also addresses the issue of integrating CMLOG into existing control systems.

  17. A polylogarithmic competitive algorithm for the k-server problem

    NARCIS (Netherlands)

    Bansal, N.; Buchbinder, N.; Madry, A.; Naor, J.

    2011-01-01

    We give the first polylogarithmic-competitive randomized online algorithm for the k-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of O(log^3 n log^2 k log log n) for any metric space on n points. Our algorithm improves upon the

  18. Clustering of users of digital libraries through log file analysis

    Directory of Open Access Journals (Sweden)

    Juan Antonio Martínez-Comeche

    2017-09-01

    Full Text Available This study analyzes how users perform information retrieval tasks when submitting queries to the Hispanic Digital Library. Clusters of users are differentiated based on their distinct information behavior. The study used the log files collected by the server over a year, and different possible clustering algorithms were compared. The k-means algorithm was found to be a suitable clustering method for the analysis of large log files from digital libraries. In the case of the Hispanic Digital Library, the results show three clusters of users, and the characteristic information behavior of each group is described.
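
    For readers unfamiliar with the method, the sketch below shows k-means clustering applied to per-user features that could be derived from digital library log files. It assumes scikit-learn; the feature table and the choice of three clusters are illustrative, not the study's data.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      # one row per user: [queries issued, results viewed, mean time between actions (s)]
      features = np.array([
          [ 2,  1, 40.0],
          [15, 30, 12.0],
          [ 4,  6, 25.0],
          [22, 41,  9.5],
          [ 1,  0, 70.0],
          [ 6,  9, 20.0],
      ])

      X = StandardScaler().fit_transform(features)    # put features on a common scale
      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
      print("cluster sizes:", np.bincount(km.labels_))
      print("cluster centres (standardised):")
      print(km.cluster_centers_)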

  19. [Radiology information system using HTML, JavaScript, and Web server].

    Science.gov (United States)

    Sone, M; Sasaki, M; Oikawa, H; Yoshioka, K; Ehara, S; Tamakawa, Y

    1997-12-01

    We have developed a radiology information system using intranet techniques, including hypertext markup language, JavaScript, and Web server. JavaScript made it possible to develop an easy-to-use application, as well as to reduce network traffic and load on the server. The system we have developed is inexpensive and flexible, and its development and maintenance are much easier than with the previous system.

  20. A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation

    Science.gov (United States)

    Watanabe, Toru; Koizumi, Hisao

    In this paper, we propose a general-purpose connections-type CTI (Computer Telephony Integration) server that provides various CTI services such as voice logging, in which the CTI server communicates with an IP-PBX using SIP (Session Initiation Protocol) and accumulates the voice packets of external-line telephone calls flowing between an extension IP telephone and a VoIP gateway connected to outside-line networks. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function which can provide various CTI services, such as a Web telephone directory, via a Web browser to PCs, cellular telephones or smartphones in mobile environments.

  1. Deep Recurrent Model for Server Load and Performance Prediction in Data Center

    Directory of Open Access Journals (Sweden)

    Zheng Huang

    2017-01-01

    Full Text Available The recurrent neural network (RNN) has been widely applied to many sequential tagging tasks such as natural language processing (NLP) and time series analysis, and it has been proved that RNNs work well in those areas. In this paper, we propose using an RNN with long short-term memory (LSTM) units for server load and performance prediction. Classical methods for performance prediction focus on building a relation between performance and the time domain, which requires many unrealistic hypotheses. Our model is built based on events (user requests), which are the root cause of server performance. We predict the performance of the servers using RNN-LSTM by analyzing the logs of servers in a data center, which contain users' access sequences. Previous work on workload prediction could not generate a detailed simulated workload, which is useful for testing the working condition of servers. Our method provides a new way to reproduce user request sequences to solve this problem by using RNN-LSTM. Experimental results show that our models achieve good performance in generating load and predicting performance on a data set logged from an online service. We conducted experiments with the nginx web server and the MySQL database server, and our methods can easily be applied to other servers in a data center.
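
    A toy version of the prediction step — training an LSTM to forecast the next value of a server load series — is sketched below, assuming TensorFlow/Keras. The synthetic request-count series, window length and network size are illustrative stand-ins for the paper's data-centre logs and model.

      import numpy as np
      import tensorflow as tf

      # synthetic per-minute request counts with a rough daily cycle plus noise
      t = np.arange(2000)
      load = 100 + 40 * np.sin(2 * np.pi * t / 288) + np.random.normal(0, 5, t.size)

      WINDOW = 24  # predict the next value from the previous 24 observations
      X = np.stack([load[i:i + WINDOW] for i in range(len(load) - WINDOW)])
      y = load[WINDOW:]
      X = X[..., np.newaxis]                  # shape: (samples, timesteps, features)

      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(WINDOW, 1)),
          tf.keras.layers.LSTM(32),
          tf.keras.layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X, y, epochs=5, batch_size=64, verbose=0)

      print("next-step load forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))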

  2. Advances in the development and application of an open source model server for building information

    NARCIS (Netherlands)

    Beetz, J.; van Berlo, L.A.H.M.; Laat, de R.; Bonsma, P.

    2011-01-01

    The need for Building Information Model (BIM) servers to facilitate collaboration has been repeatedly reported in literature and stated by industry practitioners. To date, only a few commercial implementations of model servers are available. However, these applications are either limited to

  3. Beginning SQL Server Modeling Model-driven Application Development in SQL Server

    CERN Document Server

    Weller, Bart

    2010-01-01

    Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. Beginning SQL Server Modeling will help you gain a comprehensive understanding of how to apply DSLs and other modeling components in the development of SQL Server implementations. Most importantly, after reading the book and working through the examples, you will have considerable experience using SQL M

  4. Multimedia medical data archive and retrieval server on the Internet

    Science.gov (United States)

    Komo, Darmadi; Levine, Betty A.; Freedman, Matthew T.; Mun, Seong K.; Tang, Y. K.; Chiang, Ted T.

    1997-05-01

    The Multimedia Medical Data Archive and Retrieval Server has been installed at the imaging science and information systems (ISIS) center in Georgetown University Medical Center to provide medical data archive and retrieval support for medical researchers. The medical data includes text, images, sound, and video. All medical data is keyword indexed using a database management system and placed temporarily in a staging area and then transferred to a StorageTek one terabyte tape library system with a robotic arm for permanent archive. There are two methods of interaction with the system. The first method is to use a web browser with HTML functions to perform insert, query, update, and retrieve operations. These generate dynamic SQL calls to the database and produce StorageTek API calls to the tape library. The HTML functions consist of a database, StorageTek interface, HTTP server, common gateway interface, and Java programs. The second method is to issue a DICOM store command, which is translated by the system's DICOM server to SQL calls and then produce StorageTek API calls to the tape library. The system performs as both an Internet and a DICOM server using standard protocols such as HTTP, HTML, Java, and DICOM. Users with proper authentication can log on to the server from anywhere on the Internet using a standard web browser resulting in a user-friendly, open environment, and platform independent solution for archiving multimedia medical data. It represents a complex integration of different components including a robotic tape storage system, database, user-interface, WWW protocols, and TCP/IP networking. The user will only deal with the WWW and DICOM server components of the system, the database and robotic tape library system are transparent and the user will not know that the medical data is stored on magnetic tapes. The server provides the researchers a cost-effective tool for archiving and retrieving medical data across a TCP/IP network environment. It will

  5. Efficient Incremental Garbage Collection for Workstation/Server Database Systems

    OpenAIRE

    Amsaleg , Laurent; Gruber , Olivier; Franklin , Michael

    1994-01-01

    Projet RODIN; We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a workstation/server environment. The algorithm is incremental and runs concurrently with client transactions; however, it does not hold any locks on data and does not require callbacks to clients. It is fault tolerant, but performs very little logging. The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation ...

  6. Solid waste information and tracking system server conversion project management plan

    International Nuclear Information System (INIS)

    MAY, D.L.

    1999-01-01

    The Project Management Plan governing the conversion of Solid Waste Information and Tracking System (SWITS) to a client-server architecture. The Solid Waste Information and Tracking System Project Management Plan (PMP) describes the background, planning and management of the SWITS conversion. Requirements and specification documentation needed for the SWITS conversion will be released as supporting documents

  7. Server virtualization solutions

    OpenAIRE

    Jonasts, Gusts

    2012-01-01

    Currently, the part of the information technology sector that is responsible for server infrastructure is experiencing major development in the field of server virtualization on the x86 computer architecture. A prerequisite for this development in virtualization is growth in server productivity and the underutilization of available computing power. Several companies in the market are working on two virtualization architectures – hypervisor and hosted. In this paper several virtualization products that use host...

  8. Implementing Citrix XenServer Quickstarter

    CERN Document Server

    Ahmed, Gohar

    2013-01-01

    Implementing Citrix XenServer Quick Starter is a practical, hands-on guide that will help you get started with the Citrix XenServer Virtualization technology with easy-to-follow instructions.Implementing Citrix XenServer Quick Starter is for system administrators who have little to no information on virtualization and specifically Citrix XenServer Virtualization. If you're managing a lot of physical servers and are tired of installing, deploying, updating, and managing physical machines on a daily basis over and over again, then you should probably explore your option of XenServer Virtualizati

  9. UPGRADE FOR HARDWARE/SOFTWARE SERVER AND NETWORK TOPOLOGY IN INFORMATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Oleksii O. Kaplun

    2011-02-01

    Full Text Available The problem of network modernization and of software and hardware updates for educational information systems is topical in the present period of rapid development of information technologies. The article analyses the server applications and network topology of the Institute of Information Technology and Learning Tools of the National Academy of Pedagogical Sciences of Ukraine and expounds methods for their improvement. The material presents the results of modernization carried out to increase network efficiency and reliability and to decrease response time in the Institute’s network information systems. The article gives diagrams of the network topology before upgrading and after completion of the optimization and upgrading processes.

  10. CheD: chemical database compilation tool, Internet server, and client for SQL servers.

    Science.gov (United States)

    Trepalin, S V; Yarkov, A V

    2001-01-01

    An efficient program, which runs on a personal computer, for the storage, retrieval, and processing of chemical information is presented. The program can work as a stand-alone application or in conjunction with a specifically written Web server application or with some standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.

  11. Geographic Information Systems-Transportation ISTEA management systems server-net prototype pooled fund study: Phase B summary

    Energy Technology Data Exchange (ETDEWEB)

    Espinoza, J. Jr.; Dean, C.D.; Armstrong, H.M. [and others

    1997-06-01

    The Geographic Information System-Transportation (GIS-T) ISTEA Management Systems Server Net Prototype Pooled Fund Study represents the first national cooperative effort in the transportation industry to address the management and monitoring systems as well as the statewide and metropolitan transportation planning requirements of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA). The Study was initiated in November 1993 through the Alliance for Transportation Research and under the leadership of the New Mexico State Highway and Transportation Department. Sandia National Laboratories, an Alliance partner, and Geographic Paradigm Computing, Inc. provided technical leadership for the project. In 1992, the Alliance for Transportation Research, the New Mexico State Highway and Transportation Department, Sandia National Laboratories, and Geographic Paradigm Computing, Inc., proposed a comprehensive research agenda for GIS-T. That program outlined a national effort to synthesize new transportation policy initiatives (e.g., management systems and Intelligent Transportation Systems) with the GIS-T server net ideas contained in the NCHRP project "Adaptation of GIS to Transportation". After much consultation with state, federal, and private interests, a project proposal based on this agenda was prepared and resulted in this Study. The general objective of the Study was to develop GIS-T server net prototypes supporting the ISTEA requirements for transportation planning and management and monitoring systems. This objective can be further qualified to: (1) Create integrated information system architectures and design requirements encompassing transportation planning activities and data. (2) Encourage the development of functional GIS-T server net prototypes. (3) Demonstrate multiple information systems implemented in a server net environment.

  12. NRSAS: Nuclear Receptor Structure Analysis Servers.

    NARCIS (Netherlands)

    Bettler, E.J.M.; Krause, R.; Horn, F.; Vriend, G.

    2003-01-01

    We present a coherent series of servers that can perform a large number of structure analyses on nuclear hormone receptors. These servers are part of the NucleaRDB project, which provides a powerful information system for nuclear hormone receptors. The computations performed by the servers include

  13. Interim policy on establishment and operation of internet open, anonymous information servers and services

    OpenAIRE

    Acting Dean of Computer and Information Services

    1995-01-01

    Purpose. To establish interim NPS general policy regarding establishment and operation of Open, Anonymous Information Servers and Services, such as World Wide Web (http), Gopher, Anonymous FTP, etc...

  14. SedMob: A mobile application for creating sedimentary logs in the field

    Science.gov (United States)

    Wolniewicz, Pawel

    2014-05-01

    SedMob is an open-source, mobile software package for creating sedimentary logs, targeted for use in tablets and smartphones. The user can create an unlimited number of logs, save data from each bed in the log as well as export and synchronize the data with a remote server. SedMob is designed as a mobile interface to SedLog: a free multiplatform package for drawing graphic logs that runs on PC computers. Data entered into SedMob are saved in the CSV file format, fully compatible with SedLog.

  15. Towards second-generation smart card-based authentication in health information systems: the secure server model.

    Science.gov (United States)

    Hallberg, J; Hallberg, N; Timpka, T

    2001-01-01

    Conventional smart card-based authentication systems used in health care alleviate some of the security issues in user and system authentication. Existing models still do not cover all security aspects. To enable new protective measures to be developed, an extended model of the authentication process is presented. This model includes a new entity referred to as secure server. Assuming a secure server, a method where the smart card is aware of the status of the terminal integrity verification becomes feasible. The card can then act upon this knowledge and restrict the exposure of sensitive information to the terminal as required in order to minimize the risks. The secure server model can be used to illuminate the weaknesses of current approaches and the need for extensions which alleviate the resulting risks.

  16. Designing a scalable video-on-demand server with data sharing

    Science.gov (United States)

    Lim, Hyeran; Du, David H. C.

    2001-01-01

    As current disk space and transfer speeds increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases utilization towards the full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, our intention is to propose an algorithm to find an initial configuration that successfully places videos on the disks in the system. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined with an indicator of how tolerant it is to fluctuations in the demand for videos. Considering that this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.

  17. Network Forensic Analysis: A Case Study of SQL Injection Attacks on the Gadjah Mada University Server

    Directory of Open Access Journals (Sweden)

    Resi Utami Putri

    2013-07-01

    Abstract Network forensics is a computer security investigation to find the sources of attacks on the network by examining log evidence, identifying, analyzing and reconstructing the incidents. This research was conducted at the Center of Information System and Communication Service, Gadjah Mada University. The method used was the Forensic Process Model, a model of the digital investigation process consisting of collection, examination, analysis, and reporting. The research was conducted over five months by retrieving data collected from the Snort Intrusion Detection System (IDS). Several log files were retrieved and merged into a single log file, and the data were then cleaned to fit the research. Based on the research, there were 68 IP addresses that performed illegal actions, namely SQL injection, on the server www.ugm.ac.id. Most attackers used Havij and SQLmap (automated tools to exploit vulnerabilities on a website). Besides that, there was also a Python script originating from Romania, in Europe.   Keywords— Network Forensics, The Forensic Process Models, SQL Injection

  18. Information needs for increasing log transport efficiency

    Science.gov (United States)

    Timothy P. McDonald; Steven E. Taylor; Robert B. Rummer; Jorge Valenzuela

    2001-01-01

    Three methods of dispatching trucks to loggers were tested using a log transport simulation model: random allocation, fixed assignment of trucks to loggers, and dispatch based on knowledge of the current status of trucks and loggers within the system. This 'informed' dispatch algorithm attempted to minimize the difference in time between when a logger would...

  19. An Improved Algorithm Research on the PrefixSpan Based on the Server Session Constraint

    Directory of Open Access Journals (Sweden)

    Cai Hong-Guo

    2017-01-01

    Full Text Available When we mine long sequential patterns and discover knowledge with the PrefixSpan algorithm in Web Usage Mining (WUM), the large number of elements and suffix sequences may cause computational problems such as space explosion. To solve this problem more effectively, a server session-based server log file format is first proposed. Then an improved PrefixSpan algorithm based on a server session constraint is discussed for mining frequent sequential patterns on the website. Finally, the validity and superiority of the method are demonstrated by the experiments in the paper.
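
    To make the setting concrete, the sketch below first groups log records into server sessions per visitor (using an assumed 30-minute inactivity timeout) and then mines frequent page sequences with a minimal PrefixSpan over single-item elements. It illustrates the plain algorithm only, not the authors' improved, session-constrained variant.

      from collections import defaultdict

      def build_sessions(records, timeout=1800):
          """records: iterable of (visitor_id, timestamp_seconds, page)."""
          sessions, last_seen, current = [], {}, defaultdict(list)
          for visitor, ts, page in sorted(records, key=lambda r: (r[0], r[1])):
              if visitor in last_seen and ts - last_seen[visitor] > timeout:
                  sessions.append(current[visitor])   # close the previous session
                  current[visitor] = []
              current[visitor].append(page)
              last_seen[visitor] = ts
          sessions.extend(s for s in current.values() if s)
          return sessions

      def prefixspan(sequences, min_support, prefix=()):
          """Minimal PrefixSpan for sequences of single items:
          returns {pattern: support} for patterns meeting min_support."""
          patterns, counts = {}, defaultdict(int)
          for seq in sequences:
              for item in set(seq):
                  counts[item] += 1
          for item, support in counts.items():
              if support < min_support:
                  continue
              pattern = prefix + (item,)
              patterns[pattern] = support
              projected = [seq[seq.index(item) + 1:] for seq in sequences if item in seq]
              projected = [s for s in projected if s]
              patterns.update(prefixspan(projected, min_support, pattern))
          return patterns

      # illustrative log records: (visitor, timestamp in seconds, requested page)
      records = [("a", 0, "home"), ("a", 60, "catalog"), ("a", 120, "cart"),
                 ("b", 10, "home"), ("b", 90, "catalog"), ("b", 4000, "home"),
                 ("b", 4050, "catalog"), ("b", 4100, "cart")]
      for pattern, support in sorted(prefixspan(build_sessions(records), 2).items()):
          print(pattern, support)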

  20. Comparative Analysis of Load Balancing Between a Single Web Server and a Web Server Cluster Using Linux Virtual Server

    OpenAIRE

    Lukitasari, Desy; Oklilas, Ahmad Fali

    2010-01-01

    A virtual server is a server with high scalability and high availability that is built on top of a cluster of several real servers. The real servers and the load balancer are interconnected either in a high-speed local network or geographically separated. The load balancer can send requests to different servers and make the parallel services of a cluster appear on a single IP address, and request dispatching can use the technology of IP load...

  1. On Advice Complexity of the k-server Problem under Sparse Metrics

    DEFF Research Database (Denmark)

    Gupta, S.; Kamali, S.; López-Ortiz, A.

    2013-01-01

    We consider the k-Server problem under the advice model of computation when the underlying metric space is sparse. On one side, we introduce Θ(1)-competitive algorithms for a wide range of sparse graphs, which require advice of (almost) linear size. Namely, we show that for graphs of size N and treewidth α, there is an online algorithm which receives O(n(log α + log log N)) bits of advice and optimally serves a sequence of length n. With a different argument, we show that if a graph admits a system of μ collective tree (q, r)-spanners, then there is a (q + r)-competitive algorithm which receives O(n(log μ + log log N)) bits of advice. Among other results, this gives a 3-competitive algorithm for planar graphs, provided with O(n log log N) bits of advice. On the other side, we show that an advice of size Ω(n) is required to obtain a 1-competitive algorithm for sequences of size n even…

  2. Web Server Configuration for an Academic Intranet

    National Research Council Canada - National Science Library

    Baltzis, Stamatios

    2000-01-01

    .... One of the factors that boosted this ability was the evolution of Web servers. Using web server technology, one can connect and exchange information with the most remote places all over the...

  3. Server-Aided Verification Signature with Privacy for Mobile Computing

    Directory of Open Access Journals (Sweden)

    Lingling Xu

    2015-01-01

    Full Text Available With the development of wireless technology, much data communication and processing has been conducted on mobile devices with wireless connections. As we know, mobile devices will always be resource-poor relative to static ones even though they will improve in absolute ability; therefore, they cannot process some expensive computational tasks due to their constrained computational resources. To address this problem, server-aided computing has been studied, in which power-constrained mobile devices can outsource some expensive computations to a server with powerful resources in order to reduce their computational load. However, in existing server-aided verification signature schemes, the server can learn some information about the message-signature pair to be verified, which is undesirable, especially when the message includes some secret information. In this paper, we mainly study server-aided verification signatures with privacy, in which the message-signature pair to be verified can be protected from the server. Two definitions of privacy for server-aided verification signatures are presented under collusion attacks between the server and the signer. Then, based on existing signatures, two concrete server-aided verification signature schemes with privacy are proposed, and both are proved secure.

  4. Effect of Temporal Relationships in Associative Rule Mining for Web Log Data

    Science.gov (United States)

    Mohd Khairudin, Nazli; Mustapha, Aida

    2014-01-01

    The advent of web-based applications and services has created diverse and voluminous web log data stored in web servers, proxy servers, client machines, or organizational databases. This paper investigates the effect of a temporal attribute in relational rule mining for web log data. We incorporated the characteristics of time in the rule mining process and analysed the effect of various temporal parameters. The rules generated from temporal relational rule mining are then compared against the rules generated from classical rule mining approaches such as the Apriori and FP-Growth algorithms. The results showed that by incorporating the temporal attribute, the number of rules generated is smaller but comparable in terms of quality. PMID:24587757
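
    As a concrete, much-simplified illustration of adding a temporal dimension, the sketch below splits page-visit transactions into time-of-day windows and then derives pairwise association rules per window with plain support/confidence counting (a reduced stand-in for Apriori or FP-Growth). The transactions, windows and thresholds are invented for illustration.

      from collections import Counter
      from itertools import combinations

      # (hour of visit, set of pages visited in one session) – illustrative transactions
      transactions = [
          (9,  {"home", "catalog"}), (9,  {"home", "news"}),
          (10, {"home", "catalog", "cart"}), (10, {"catalog", "cart"}),
          (21, {"home", "news"}), (21, {"news", "forum"}), (22, {"news", "forum"}),
      ]

      def window(hour):                 # temporal attribute: coarse time-of-day window
          return "morning" if hour < 12 else "evening"

      def rules_for(baskets, min_support=0.5, min_confidence=0.7):
          """Pairwise rules antecedent -> consequent with support and confidence."""
          n = len(baskets)
          items = Counter(i for b in baskets for i in b)
          pairs = Counter(frozenset(p) for b in baskets for p in combinations(sorted(b), 2))
          rules = []
          for pair, count in pairs.items():
              if count / n < min_support:
                  continue
              for antecedent in pair:
                  confidence = count / items[antecedent]
                  if confidence >= min_confidence:
                      consequent = next(iter(pair - {antecedent}))
                      rules.append((antecedent, consequent, count / n, confidence))
          return rules

      by_window = {}
      for hour, basket in transactions:
          by_window.setdefault(window(hour), []).append(basket)

      for w, baskets in by_window.items():
          for a, c, s, conf in rules_for(baskets):
              print(f"[{w}] {a} -> {c}  support={s:.2f} confidence={conf:.2f}")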

  5. Microsoft Windows Server 2012 administration instant reference

    CERN Document Server

    Hester, Matthew

    2013-01-01

    Fast, accurate answers for common Windows Server questions Serving as a perfect companion to all Windows Server books, this reference provides you with quick and easily searchable solutions to day-to-day challenges of Microsoft's newest version of Windows Server. Using helpful design features such as thumb tabs, tables of contents, and special heading treatments, this resource boasts a smooth and seamless approach to finding information. Plus, quick-reference tables and lists provide additional on-the-spot answers. Covers such key topics as server roles and functionality, u

  6. Geographic information systems - transportation ISTEA management systems server net prototype pooled fund study : phase B - summary

    Science.gov (United States)

    1997-06-01

    The Geographic Information System-Transportation (GIS-T) ISTEA Management Systems Server Net Prototype Pooled Fund Study represents the first national cooperative effort in the transportation industry to address the management and monitoring systems ...

  7. Log Usage Analysis: What it Discloses about Use, Information Seeking and Trustworthiness

    Directory of Open Access Journals (Sweden)

    David Nicholas

    2014-06-01

    Full Text Available The Trust and Authority in Scholarly Communications in the Light of the Digital Transition research project was a study which investigated the behaviours and attitudes of academic researchers as producers and consumers of scholarly information resources in respect to how they determine authority and trustworthiness. The research questions for the study arose out of CIBER’s studies of the virtual scholar. This paper focuses on elements of this study, mainly an analysis of a scholarly publisher’s usage logs, which was undertaken at the start of the project in order to build an evidence base which would help calibrate the main methodological tools used by the project: interviews and questionnaire. The specific purpose of the log study was to identify and assess the digital usage behaviours that potentially raise trustworthiness and authority questions. Results from the self-report part of the study were additionally used to explain the logs. The main findings were that: 1) logs provide a good indicator of use and information seeking behaviour, albeit in respect to just a part of the information seeking journey; 2) the ‘lite’ form of information seeking behaviour observed in the logs is a sign of users trying to make their mind up in the face of a tsunami of information as to what is relevant and to be trusted; 3) Google and Google Scholar are the discovery platforms of choice for academic researchers, which partly points to the fact that they are influenced in what they use and read by ease of access; 4) usage is not a suitable proxy for quality. The paper also provides contextual data from CIBER’s previous studies.

  8. PUMA Internet Task Logging Using the IDAC-1

    Directory of Open Access Journals (Sweden)

    K. N. Tarchanidis

    2014-08-01

    Full Text Available This project uses an IDAC-1 board to sample the joint angle positions of the PUMA 761 robot and log the results on a computer. The robot is at the task location, and the logging computer is located at a different one. The task the robot is performing is based on a Pseudo Stereo Vision System (PSVS). The Internet is the transport medium, and the protocol used in this project is UDP/IP. The actual angle is taken straight from the PUMA controller. High-resolution potentiometers are connected to each robot joint; their outputs are buffered and sampled as potential differences by an A/D converter integrated on the IDAC-1. Acting as a client over the Internet, the logging computer asks for the angle set, and the IDAC-1 responds as a server with a 10-bit-resolution sample of the joint positions. The whole task is logged in a file on the logging computer. This application gives the Internet user the ability to monitor and log robot tasks from anywhere on the World Wide Web (www).
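
    The request/response loop described above can be mimicked with a few lines of Python on the logging (client) side, shown below. The board address, packet layout and file name are hypothetical placeholders; the sketch only illustrates UDP polling and CSV logging, not the actual IDAC-1 protocol.

      import socket, struct, time

      ADDR = ("192.0.2.10", 5005)     # placeholder address/port for the sampling board
      FMT = "!6H"                     # six unsigned 16-bit values, one per PUMA joint

      def poll_and_log(logfile="joint_angles.csv", period=0.1):
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.settimeout(1.0)
          with open(logfile, "a") as log:
              while True:
                  sock.sendto(b"GET_ANGLES", ADDR)      # ask the server for one sample set
                  try:
                      data, _ = sock.recvfrom(64)
                  except socket.timeout:
                      continue                          # UDP is lossy; just re-poll
                  if len(data) < struct.calcsize(FMT):
                      continue                          # ignore short/garbled packets
                  raw = struct.unpack(FMT, data[:struct.calcsize(FMT)])
                  angles = [v & 0x3FF for v in raw]     # keep the 10-bit A/D range 0..1023
                  log.write(f"{time.time():.3f}," + ",".join(map(str, angles)) + "\n")
                  time.sleep(period)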

  9. Using Servers to Enhance Control System Capability

    International Nuclear Information System (INIS)

    Bickley, M.; Bowling, B. A.; Bryan, D. A.; Zeijts, J. van; White, K. S.; Witherspoon, S.

    1999-01-01

    Many traditional control systems include a distributed collection of front end machines to control hardware. Backend tools are used to view, modify, and record the signals generated by these front end machines. Software servers, which are a middleware layer between the front and back ends, can improve a control system in several ways. Servers can enable on-line processing of raw data and consolidation of functionality. In many cases data retrieved from the front end must be processed in order to convert the raw data into useful information. These calculations are often redundantly performed by different programs, frequently offline. Servers can monitor the raw data and rapidly perform calculations, producing new signals which can be treated like any other control system signal and can be used by any back end application. Algorithms can be incorporated to actively modify signal values in the control system based upon changes of other signals, essentially producing feedback in the control system. Servers thus increase the flexibility of a control system. Lastly, servers running on inexpensive UNIX workstations can relay or cache frequently needed information, reducing the load on front end hardware by functioning as concentrators. Rather than many back end tools connecting directly to the front end machines, increasing the work load of these machines, they instead connect to the server. Servers like those discussed above have been used successfully at the Thomas Jefferson National Accelerator Facility to provide functionality such as beam steering, fault monitoring, storage of machine parameters, and on-line data processing. The authors discuss the potential uses of such servers, and share the results of work performed to date.

  10. Joint study on activation of international nuclear information use through implementation INIS DB server with IAEA

    International Nuclear Information System (INIS)

    Lee, H. C.; Yi, J. H.; Kim, T. W.; Chun, Y. C.; Yoo, A. N.

    2003-03-01

    In order to install the INIS DB host site in Korea, the Korean INIS national centre cooperated with KAERI and the organizations concerned, contacted the INIS secretariat, participated in the consultative meeting of INIS liaison officers, and strengthened relationships with Asian and Oceanian countries. KAERI staff and a maintenance engineer participated in a training seminar on INIS DB installation and maintenance. The Korean national centre obtained the INIS DB server code and data through this international cooperation. Based on this code and data, hardware and software for the INIS DB server were purchased, the software was installed on the INIS DB server system, and the INIS database (2,347,302) was constructed. In 2003 the INIS host DB site started to provide a web service in Korea. It enables users in member countries in Asia, as well as domestic users, to get information quickly. It will also bring about active use of the domestic INIS DB and increase the productivity of domestic research activities.

  11. The HydroServer Platform for Sharing Hydrologic Data

    Science.gov (United States)

    Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.

    2010-12-01

    The CUAHSI Hydrologic Information System (HIS) is an internet based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture is comprised of servers for publishing and sharing data, a centralized catalog to support cross server data discovery and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed point monitoring sites as well as spatially distributed, GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards based approach to data publication, relying on accepted and emerging standards for data storage and transfer. CUAHSI developed HydroServer code is free with community code development managed through the codeplex open source code repository and development system. There is some reliance on widely used commercial software for general purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its

  12. UNIX secure server : a free, secure, and functional server example

    OpenAIRE

    Sastre, Hugo

    2016-01-01

    The purpose of this thesis work was to introduce a UNIX server not only as a personal server but also as a starting point for investigation and development at a professional level. The objective of this thesis was to build a secure server providing not only an FTP server but also an HTTP server and a cloud system for remote backups. OpenBSD was used as the operating system. OpenBSD is a UNIX-like operating system made by hackers for hackers. The difference with other systems that might partially provid...

  13. Understanding Academic Information Seeking Habits through Analysis of Web Server Log Files: The Case of the Teachers College Library Website

    Science.gov (United States)

    Asunka, Stephen; Chae, Hui Soo; Hughes, Brian; Natriello, Gary

    2009-01-01

    Transaction logs of user activity on an academic library website were analyzed to determine general usage patterns on the website. This paper reports on insights gained from the analysis, and identifies and discusses issues relating to content access, interface design and general functionality of the website. (Contains 13 figures and 8 tables.)

  14. Display graphical information optimization methods in a client-server information system

    Directory of Open Access Journals (Sweden)

    Юрий Викторович Мазуревич

    2015-07-01

    Full Text Available This paper presents an approach to reducing the load time and the volume of data necessary to display a web page by means of server-side preprocessing. The effectiveness of this approach has been measured. The conditions under which the approach is most effective were identified, its disadvantages were examined, and ways to reduce them are presented.

  15. RStrucFam: a web server to associate structure and cognate RNA for RNA-binding proteins from sequence information.

    Science.gov (United States)

    Ghosh, Pritha; Mathew, Oommen K; Sowdhamini, Ramanathan

    2016-10-07

    RNA-binding proteins (RBPs) interact with their cognate RNA(s) to form large biomolecular assemblies. They are versatile in their functionality and are involved in a myriad of processes inside the cell. RBPs with similar structural features and common biological functions are grouped together into families and superfamilies. It will be useful to obtain an early understanding and association of the RNA-binding properties of gene product sequences. Here, we report a web server, RStrucFam, to predict the structure, type of cognate RNA(s) and function(s) of proteins, where possible, from mere sequence information. The web server employs Hidden Markov Model scan (hmmscan) to enable association to a back-end database of structural and sequence families. The database (HMMRBP) comprises 437 HMMs of RBP families of known structure that have been generated using structure-based sequence alignments and 746 sequence-centric RBP family HMMs. The input protein sequence is associated with structural or sequence domain families, if structure or sequence signatures exist. In the case of association of the protein with a family of known structures, output features such as a multiple structure-based sequence alignment (MSSA) of the query with all other members of that family are provided. Further, cognate RNA partner(s) for that protein, Gene Ontology (GO) annotations, if any, and a homology model of the protein can be obtained. Users can also browse the database for details pertaining to each family, protein or RNA and their related information, based on keyword search or RNA motif search. RStrucFam is a web server that exploits structurally conserved features of RBPs, derived from known family members and imprinted in mathematical profiles, to predict putative RBPs from sequence information. Proteins that fail to associate with such structure-centric families are further queried against the sequence-centric RBP family HMMs in the HMMRBP database. Further, all other essential

  16. Solid waste information and tracking system client-server conversion project management plan

    International Nuclear Information System (INIS)

    May, D.L.

    1998-01-01

    This Project Management Plan is the lead planning document governing the proposed conversion of the Solid Waste Information and Tracking System (SWITS) to a client-server architecture. This plan presents the content specified by American National Standards Institute (ANSI)/Institute of Electrical and Electronics Engineers (IEEE) standards for software development, with additional information categories deemed to be necessary to describe the conversion fully. This plan is a living document that will be reviewed on a periodic basis and revised when necessary to reflect changes in baseline design concepts and schedules. This PMP describes the background, planning and management of the SWITS conversion. It does not constitute a statement of product requirements. Requirements and specification documentation needed for the SWITS conversion will be released as supporting documents

  17. Professional Team Foundation Server 2010

    CERN Document Server

    Blankenship, Ed; Holliday, Grant; Keller, Brian

    2011-01-01

    Authoritative guide to TFS 2010 from a dream team of Microsoft insiders and MVPs! Microsoft Visual Studio Team Foundation Server (TFS) has evolved until it is now an essential tool for Microsoft's Application Lifecycle Management suite of productivity tools, enabling collaboration within and among software development teams. By 2011, TFS will replace Microsoft's leading source control system, Visual SourceSafe, resulting in an even greater demand for information about it. Professional Team Foundation Server 2010, written by an accomplished team of Microsoft insiders and Microsoft MVPs, provides

  18. http Log Analysis

    DEFF Research Database (Denmark)

    Bøving, Kristian Billeskov; Simonsen, Jesper

    2004-01-01

    This article documents how log analysis can inform qualitative studies concerning the usage of web-based information systems (WIS). No prior research has used http log files as data to study collaboration between multiple users in organisational settings. We investigate how to perform http log analysis; what http log analysis says about the nature of collaborative WIS use; and how results from http log analysis may support other data collection methods such as surveys, interviews, and observation. The analysis of log files initially lends itself to research designs, which serve to test hypotheses using a quantitative methodology. We show that http log analysis can also be valuable in qualitative research such as case studies. The results from http log analysis can be triangulated with other data sources and for example serve as a means of supporting the interpretation of interview data...

  19. Optimizing the Loads of multi-player online game Servers using Markov Chains

    DEFF Research Database (Denmark)

    Saeed, Aamir; Olsen, Rasmus Løvenstein; Pedersen, Jens Myrup

    2015-01-01

    that is created due to the load balancing of servers. Load balancing among servers is sensitive to correct status information. Markov-based load prediction was introduced in this paper to predict the load of under-loaded servers, based on the arrival (μ) and departure (λ) rates of players. The prediction based ... that need to be considered when developing a load balancing algorithm, that is, the reliability of the information that is shared. Simulation results show that Markov-based prediction of load information performed better than normal load status information sharing ...
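
    The following sketch illustrates the general idea of Markov-based load prediction (not the paper's exact model): the player count is treated as a birth-death chain driven by arrival and departure probabilities per time step, and the distribution of load a few steps ahead is obtained from powers of the transition matrix. The rates, capacity, and time step are invented for the example:

```python
# Hedged sketch of Markov-based load prediction for a game server.
import numpy as np

def predict_load(current_load, arrival_rate, departure_rate, capacity, steps, dt=1.0):
    """Return the probability distribution over load levels after `steps` time steps."""
    p_up = min(arrival_rate * dt, 1.0)         # probability a player joins in dt
    p_down = min(departure_rate * dt, 1.0)     # probability a player leaves in dt
    P = np.zeros((capacity + 1, capacity + 1)) # birth-death transition matrix
    for n in range(capacity + 1):
        if n < capacity:
            P[n, n + 1] = p_up * (1 - p_down)
        if n > 0:
            P[n, n - 1] = p_down * (1 - p_up)
        P[n, n] = 1.0 - P[n].sum()             # stay at the same load level
    dist = np.zeros(capacity + 1)
    dist[current_load] = 1.0
    return dist @ np.linalg.matrix_power(P, steps)

# Example: expected load 10 steps ahead for an under-loaded server
dist = predict_load(current_load=20, arrival_rate=0.3, departure_rate=0.2,
                    capacity=100, steps=10)
print((dist * np.arange(101)).sum())
```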

  20. Design and Analysis of an Enhanced Patient-Server Mutual Authentication Protocol for Telecare Medical Information System.

    Science.gov (United States)

    Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Obaidat, Mohammad S

    2015-11-01

    In order to access a remote medical server, patients generally use a smart card to log in to the server. It has been observed that most user (patient) authentication protocols suffer from a stolen smart card attack, meaning that the attacker can mount several common attacks after extracting the smart card information. Recently, Lu et al. proposed a session key agreement protocol between the patient and the remote medical server and claimed that the protocol is secure against the relevant security attacks. However, this paper presents several security attacks on Lu et al.'s protocol, such as an identity trace attack, a new smart card issue attack, a patient impersonation attack and a medical server impersonation attack. In order to fix the mentioned security pitfalls, including the stolen smart card attack, this paper proposes an efficient remote mutual authentication protocol using a smart card. We have then simulated the proposed protocol using the widely accepted AVISPA simulation tool, whose results confirm that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. Moreover, rigorous security analysis proves that the proposed protocol provides strong protection against the relevant security attacks, including the stolen smart card attack. We compare the proposed scheme with several related schemes in terms of computation cost and communication cost as well as security functionalities. It has been observed that the proposed scheme is comparatively better than related existing schemes.
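
    As a generic illustration of mutual authentication (not the protocol proposed in the paper), the sketch below shows an HMAC-based challenge-response in which both sides prove knowledge of a shared secret without transmitting it, and fresh nonces resist replay; real telecare protocols add smart-card handling, identity protection, and formal verification:

```python
# Generic mutual-authentication illustration: both parties answer the other's
# challenge with an HMAC over the pair of nonces, proving knowledge of the
# shared key without ever sending it.
import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(32)   # provisioned out of band (e.g., inside the smart card)

def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Patient -> Server: nonce_p ; Server -> Patient: nonce_s, proof_s
nonce_p = secrets.token_bytes(16)
nonce_s = secrets.token_bytes(16)
proof_s = respond(nonce_p + nonce_s)          # server authenticates to patient
assert hmac.compare_digest(proof_s, respond(nonce_p + nonce_s))

# Patient -> Server: proof_p
proof_p = respond(nonce_s + nonce_p)          # patient authenticates to server
assert hmac.compare_digest(proof_p, respond(nonce_s + nonce_p))
```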

  1. Log-Based Recovery in Asynchronous Distributed Systems. Ph.D. Thesis

    Science.gov (United States)

    Kane, Kenneth Paul

    1989-01-01

    A log-based mechanism is described for restoring consistent states to replicated data objects after failures. This report focuses on preserving a causal form of consistency based on the notion of virtual time. Causal consistency has been shown to apply to a variety of applications, including distributed simulation, task decomposition, and mail delivery systems. Several mechanisms have been proposed for implementing causally consistent recovery, most notably those of Strom and Yemini, and Johnson and Zwaenepoel. The mechanism proposed here differs from these in two major respects. First, a roll-forward style of recovery is implemented: a functioning process is never required to roll back its state in order to achieve consistency with a recovering process. Second, the mechanism does not require any explicit information about the causal dependencies between updates. Instead, all necessary dependency information is inferred from the order in which updates are logged by the object servers. This basic recovery technique appears to be applicable to forms of consistency other than causal consistency. In particular, it is shown how the recovery technique can be modified to support an atomic form of consistency (grouping consistency). By combining grouping consistency with causal consistency, it may even be possible to implement serializable consistency within this mechanism.
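
    A toy sketch of the roll-forward idea under simplifying assumptions (a single key-value object store and a fully surviving log): dependency order is taken to be the order in which updates were logged, so a fresh replica is recovered simply by replaying the log, and no functioning process ever rolls back:

```python
# Roll-forward recovery sketch: log order stands in for explicit dependency
# metadata, and recovery is a replay of the surviving log.
class ObjectServer:
    def __init__(self):
        self.state = {}
        self.log = []                     # append-only log of applied updates

    def apply(self, key, value):
        self.state[key] = value
        self.log.append((key, value))     # the logging order encodes update order

def recover(from_server: ObjectServer) -> ObjectServer:
    """Roll a fresh replica forward by replaying the surviving log."""
    replica = ObjectServer()
    for key, value in from_server.log:
        replica.apply(key, value)
    return replica

primary = ObjectServer()
primary.apply("x", 1)
primary.apply("y", 2)
primary.apply("x", 3)
restored = recover(primary)
assert restored.state == primary.state
```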

  2. Mfold web server for nucleic acid folding and hybridization prediction.

    Science.gov (United States)

    Zuker, Michael

    2003-07-01

    The abbreviated name, 'mfold web server', describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and 'energy dot plots', are available for the folding of single sequences. A variety of 'bulk' servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as 'MFOLDROOT'.

  3. A distributed design for monitoring, logging, and replaying device readings at LAMPF

    International Nuclear Information System (INIS)

    Burns, M.

    1991-01-01

    As control of the Los Alamos Meson Physics linear accelerator and Proton Storage Ring moves to a more distributed system, it has been necessary to redesign the software which monitors, logs, and replays device readings throughout the facility. The new design allows devices to be monitored and their readings logged locally on a network of computers. Control of the monitoring and logging process is available throughout the network from user interfaces which communicate via remote procedure calls with server processes running on each node which monitors and records device readings. Similarly, the logged data can be replayed from anywhere on the network. Two major requirements influencing the final design were the need to reduce the load on the CPU of the control machines, and the need for much faster replay of the logged device readings. 1 ref., 2 figs

  4. A distributed design for monitoring, logging, and replaying device readings at LAMPF

    International Nuclear Information System (INIS)

    Burns, M.

    1992-01-01

    As control of the Los Alamos Meson Physics linear accelerator and Proton Storage Ring moves to a more distributed system, it has been necessary to redesign the software which monitors, logs, and replays device readings throughout the facility. The new design allows devices to be monitored and their readings logged locally on a network of computers. Control of the monitoring and logging process is available throughout the network from user interfaces which communicate via remote procedure calls with server processes running on each node which monitors and records device readings. Similarly, the logged data can be replayed from anywhere on the network. Two major requirements influencing the final design were the need to reduce the load on the CPU of the control machines, and the need for much faster replay of the logged device readings. (author)

  5. GFFview: A Web Server for Parsing and Visualizing Annotation Information of Eukaryotic Genome.

    Science.gov (United States)

    Deng, Feilong; Chen, Shi-Yi; Wu, Zhou-Lin; Hu, Yongsong; Jia, Xianbo; Lai, Song-Jia

    2017-10-01

    Owing to the wide application of RNA sequencing (RNA-seq) technology, more and more eukaryotic genomes have been extensively annotated, including gene structure, alternative splicing, and noncoding loci. Genome annotation is commonly stored as plain text in General Feature Format (GFF), which can be hundreds or thousands of megabytes in size. Manipulating GFF files is therefore a challenge for biologists without bioinformatics skills. In this study, we provide a web server (GFFview) for parsing the annotation information of a eukaryotic genome and then generating a statistical description of six indices for visualization. GFFview is very useful for investigating the quality of, and differences between, de novo assembled transcriptomes in RNA-seq studies.
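
    A minimal sketch of the parsing step such a viewer performs, streaming a (possibly very large) GFF3 file and tabulating feature counts per type and per sequence region; the input file name is a placeholder:

```python
# Stream a GFF3 file and count features per type and per seqid without loading
# the whole file into memory.
from collections import Counter

def gff_feature_stats(path: str):
    per_type, per_seqid = Counter(), Counter()
    with open(path) as fh:
        for line in fh:
            if not line.strip() or line.startswith("#"):
                continue                           # skip blanks and comment/pragma lines
            cols = line.rstrip("\n").split("\t")
            if len(cols) < 8:
                continue                           # skip malformed rows
            seqid, ftype = cols[0], cols[2]
            per_type[ftype] += 1
            per_seqid[seqid] += 1
    return per_type, per_seqid

# Example:
# types, regions = gff_feature_stats("annotation.gff3")
# print(types.most_common(5))
```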

  6. Instrumentation Of The CERN Accelerator Logging Service: Ensuring Performance, Scalability, Maintenance And Diagnostics

    CERN Document Server

    Roderick, C; Dinis Teixeira, D

    2011-01-01

    The CERN accelerator Logging Service currently holds more than 90 terabytes of data online, and processes approximately 450 gigabytes per day, via hundreds of data loading processes and data extraction requests. This service is mission-critical for day-to-day operations, especially with respect to the tracking of live data from the LHC beam and equipment. In order to effectively manage any service, the service provider’s goals should include knowing how the underlying systems are being used, in terms of: “Who is doing what, from where, using which applications and methods, and how long each action takes”. Armed with such information, it is then possible to: analyse and tune system performance over time; plan for scalability ahead of time; assess the impact of maintenance operations and infrastructure upgrades; diagnose past, on-going, or re-occurring problems. The Logging Service is based on Oracle DBMS and Application Servers, and Java technology, and is comprised of several layered and multi-tiered s...

  7. Analyzing Web Server Logs to Improve a Site's Usage. The Systems Librarian

    Science.gov (United States)

    Breeding, Marshall

    2005-01-01

    This column describes ways to streamline and optimize how a Web site works in order to improve both its usability and its visibility. The author explains how to analyze logs and other system data to measure the effectiveness of the Web site design and search engine.

  8. GeoServer cookbook

    CERN Document Server

    Iacovella, Stefano

    2014-01-01

    This book is ideal for GIS experts, developers, and system administrators who have had a first glance at GeoServer and who are eager to explore all its features in order to configure professional map servers. Basic knowledge of GIS and GeoServer is required.

  9. An Electronic Healthcare Record Server Implemented in PostgreSQL

    Directory of Open Access Journals (Sweden)

    Tony Austin

    2015-01-01

    Full Text Available This paper describes the implementation of an Electronic Healthcare Record server inside a PostgreSQL relational database without dependency on any further middleware infrastructure. The five-part international standard for communicating healthcare records (ISO EN 13606) is used as the information basis for the design of the server. We describe some of the features that this standard demands that are provided by the server, and other areas where assumptions about the durability of communications or the presence of middleware lead to a poor fit. Finally, we discuss the use of the server in two real-world scenarios including a commercial application.

  10. A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.

    Science.gov (United States)

    Cohen, Laura B.

    2003-01-01

    Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…

  11. Comparing two digital consumer health television services using transaction log analysis

    Directory of Open Access Journals (Sweden)

    Paul Huntington

    2002-09-01

    Full Text Available Use is an important characteristic in determining the success or otherwise of any digital information service, and in making comparisons between services. The source of most use data is the server logs that record user activity on a real-time and continuous basis. There is much demand from sponsors, channel owners and marketing departments for this information. The authors evaluate the performance of use metrics, including reach, in order to make comparisons between two services and discuss the methodological problems associated with making such comparisons. The two services were: Living Health, managed by Flextech and distributed by Telewest, and NHS Direct Digital, managed by Communicopia Data and distributed by Kingston Interactive Television. The data were collected over the period August 2001 to February 2002. During this period, the two sites were visited by approximately 20 000 people who recorded more than three-quarters of a million page views.

  12. CCTOP: a Consensus Constrained TOPology prediction web server.

    Science.gov (United States)

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access of the CCTOP server is also available, and an example of client-side script is provided. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. CACHING DATA STORED IN SQL SERVER FOR OPTIMIZING THE PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2016-12-01

    Full Text Available This paper presents the architecture of a web site together with the different techniques used to optimize the performance of loading the web content. The architecture presented here is for an e-commerce site developed on Windows with MVC, IIS and Microsoft SQL Server. Caching the data is one technique used by browsers, by the web server itself, or by proxy servers. Caching is done without the knowledge of users, yet the most recent information from the server must still be provided to the user. This means that the caching mechanism has to be aware of any modification of data on the server. Different kinds of information are presented in an e-commerce site for each product, such as images, product code, description, properties or stock.
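
    As an illustrative sketch (not the paper's implementation), the snippet below shows a small server-side cache with a time-to-live placed in front of product queries, so repeated page loads avoid hitting SQL Server while an invalidation hook keeps the cache aware of data changes; the loader function is a stand-in for real data access code:

```python
# Small TTL cache placed in front of a database query path.
import time

class TTLCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}                       # key -> (expires_at, value)

    def get_or_load(self, key, loader):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]                    # fresh cache hit
        value = loader(key)                    # e.g., a SELECT against SQL Server
        self._store[key] = (time.monotonic() + self.ttl, value)
        return value

    def invalidate(self, key):
        self._store.pop(key, None)             # call when the product data changes

product_cache = TTLCache(ttl_seconds=300)
# Usage (load_product_from_db is hypothetical data-access code):
# details = product_cache.get_or_load(product_id, load_product_from_db)
```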

  14. Mastering Lync Server 2010

    CERN Document Server

    Winters, Nathan

    2012-01-01

    An in-depth guide on the leading Unified Communications platform Microsoft Lync Server 2010 maximizes communication capabilities in the workplace like no other Unified Communications (UC) solution. Written by experts who know Lync Server inside and out, this comprehensive guide shows you step by step how to administer the newest and most robust version of Lync Server. Along with clear and detailed instructions, learning is aided by exercise problems and real-world examples of established Lync Server environments. You'll gain the skills you need to effectively deploy Lync Server 2010 and be on

  15. LogScope

    Science.gov (United States)

    Havelund, Klaus; Smith, Margaret H.; Barringer, Howard; Groce, Alex

    2012-01-01

    LogScope is a software package for analyzing log files. The intended use is for offline post-processing of such logs, after the execution of the system under test. LogScope can, however, in principle, also be used to monitor systems online during their execution. Logs are checked against requirements formulated as monitors expressed in a rule-based specification language. This language has similarities to a state machine language, but is more expressive, for example, in its handling of data parameters. The specification language is user friendly, simple, and yet expressive enough for many practical scenarios. The LogScope software was initially developed to specifically assist in testing JPL's Mars Science Laboratory (MSL) flight software, but it is very generic in nature and can be applied to any application that produces some form of logging information (which almost any software does).
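
    The sketch below conveys the flavour of rule-based log checking with a far simpler monitor than LogScope's specification language: every COMMAND_DISPATCH event must eventually be matched by a COMMAND_COMPLETE for the same command. The event format is hypothetical:

```python
# Tiny rule-based log monitor: check a dispatch/complete pairing requirement.
def check_dispatch_complete(events):
    """events: iterable of dicts like {"type": ..., "name": ...}."""
    pending = set()            # commands dispatched but not yet completed
    violations = []
    for ev in events:
        if ev["type"] == "COMMAND_DISPATCH":
            pending.add(ev["name"])
        elif ev["type"] == "COMMAND_COMPLETE":
            if ev["name"] in pending:
                pending.discard(ev["name"])
            else:
                violations.append(f"complete without dispatch: {ev['name']}")
    violations.extend(f"dispatch never completed: {name}" for name in pending)
    return violations

log = [
    {"type": "COMMAND_DISPATCH", "name": "TAKE_IMAGE"},
    {"type": "COMMAND_COMPLETE", "name": "TAKE_IMAGE"},
    {"type": "COMMAND_DISPATCH", "name": "DRIVE"},
]
print(check_dispatch_complete(log))   # -> ['dispatch never completed: DRIVE']
```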

  16. Migration of the CNA maintenance information system to a client server architecture

    International Nuclear Information System (INIS)

    Gonzalez Crego, E.; Martin Lopez-Suevos, C.

    1994-01-01

    The paper explains the guidelines and methodology followed to carry out the migration of the CNA computerized maintenance system (SIGE) to a system with a client/server architecture based on ORACLE. The following guidelines were established to carry out the migration: (1) ensure that the new system would contain all the information of the former system, i.e., no information would be lost during migration; (2) improve the technical design of the application, while maintaining at least the functionality of the former application; (3) incorporate modifications into the application which would permit incremental improvement of its functionality; (4) carry out the migration at the minimum cost in time and resources. To construct the application, a strict development methodology was followed and certain standards were drawn up to significantly increase the speed. Special use was made of: (1) data models; (2) process models which operate on the data model; (3) SQL-FORMS standards; (4) safety features.

  17. Zope based electronic operation log system - Zlog

    International Nuclear Information System (INIS)

    Yoshii, K.; Satoh, Y.; Kitabayashi, T.

    2004-01-01

    Since January 2004, the Zope-based electronic operation logging system, named Zlog, has been running at the KEKB and AR accelerator facilities. Because Zope is a Python-based, open-source web application server and the Python language is familiar to the members of the KEKB accelerator control group, we were able to develop the Zlog system rapidly. In this paper, we report the development history and the present status of the Zlog system. We also show that some general plug-in components, called Zope products, have been useful for our Zlog development. (author)

  18. Client Server design and implementation issues in the Accelerator Control System environment

    International Nuclear Information System (INIS)

    Sathe, S.; Hoff, L.; Clifford, T.

    1995-01-01

    In distributed system communication software design, the Client Server model has been widely used. This paper addresses the design and implementation issues of such a model, particularly when used in Accelerator Control Systems. In designing the Client Server model one needs to decide how the services will be defined for a server, what types of messages the server will respond to, which data formats will be used for the network transactions, and how the server will be located by the client. Special consideration needs to be given to error handling on both the server and client side. Since the server is usually located on a machine other than the client, easy and informative server diagnostic capability is required. The higher level of abstraction provided by the Client Server model simplifies application writing, but fine control over network parameters is essential to improve performance. The above mentioned design issues and implementation trade-offs are discussed in this paper
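
    A minimal sketch of these issues in code, under illustrative assumptions (a newline-delimited JSON message format, an arbitrary service name, and a loopback address): the server exposes named services, replies in a fixed data format, and reports errors explicitly so the client can produce informative diagnostics:

```python
# Minimal client/server sketch: named services, JSON-over-TCP messages, and an
# explicit error reply for client-side diagnostics.
import json, socket, socketserver, threading

SERVICES = {"read_device": lambda name: {"device": name, "value": 42.0}}

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:                     # one JSON request per line
            try:
                req = json.loads(raw)
                result = SERVICES[req["service"]](*req.get("args", []))
                reply = {"ok": True, "result": result}
            except Exception as exc:               # report errors instead of dying
                reply = {"ok": False, "error": f"{type(exc).__name__}: {exc}"}
            self.wfile.write((json.dumps(reply) + "\n").encode())

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as sock:
    sock.sendall(b'{"service": "read_device", "args": ["BPM01"]}\n')
    print(sock.makefile().readline())              # {"ok": true, "result": {...}}
server.shutdown()
```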

  19. Disk Storage Server

    CERN Multimedia

    This model was a disk storage server used in the Data Centre up until 2012. Each tray contains a hard disk drive (see the 5TB hard disk drive on the main disk display section - this actually fits into one of the trays). There are 16 trays in all per server. There are hundreds of these servers mounted on racks in the Data Centre, as can be seen.

  20. Group-Server Queues

    OpenAIRE

    Li, Quan-Lin; Ma, Jing-Yu; Xie, Mingzhou; Xia, Li

    2017-01-01

    By analyzing energy-efficient management of data centers, this paper proposes and develops a class of interesting Group-Server Queues, and establishes two representative group-server queues through loss networks and impatient customers, respectively. Furthermore, these two group-server queues are given model descriptions and the necessary interpretation. A simple mathematical discussion is also provided, and simulations are made to study the expected queue lengths, the expected sojourn times ...

  1. The design and implementation of web mining in web sites security

    Science.gov (United States)

    Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li

    2003-06-01

    The backdoor or information leak of Web servers can be detected by using Web Mining techniques on abnormal Web log and Web application log data. The security of Web servers can thereby be enhanced and the damage of illegal access avoided. First, a system was proposed for discovering patterns of information leakage in CGI scripts from Web log data. Second, those patterns were provided to system administrators so they could modify their code and enhance Web site security. The following aspects were described: one is to combine the web application log with the web log to extract more information, so that web data mining can be used on the web log to discover information that a firewall and an Intrusion Detection System cannot find. Another approach is to propose an operation module of the web site to enhance Web site security. In the cluster server session, a Density-Based Clustering technique is used to reduce resource cost and obtain better efficiency.
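
    As a hedged illustration of the density-based clustering step, the sketch below runs DBSCAN over invented per-session features derived from combined web and application logs and treats points labelled as noise as candidates for abnormal access; the features and thresholds are not from the paper:

```python
# Density-based clustering of per-session log features; DBSCAN noise points
# (label -1) are flagged as candidates for abnormal access.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# one row per session: [requests, distinct CGI scripts hit, 4xx/5xx responses]
sessions = np.array([
    [12, 3, 0], [15, 4, 1], [11, 3, 0], [14, 3, 0],
    [210, 48, 37],                      # e.g., a scanner probing CGI scripts
])

X = StandardScaler().fit_transform(sessions)
labels = DBSCAN(eps=0.8, min_samples=3).fit_predict(X)
suspicious = np.where(labels == -1)[0]
print("suspicious session indices:", suspicious)
```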

  2. Installing and Testing a Server Operating System

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2003-08-01

    Full Text Available The paper is based on the author's experience with administration of the FreeBSD server operating system on three servers in use under the academicdirect.ro domain. The paper describes a set of installation, preparation, and administration aspects of a FreeBSD server. The first issue of the paper is the installation procedure of the FreeBSD operating system on the i386 computer architecture. The discussed problems are boot disk preparation and use, hard disk partitioning, and operating system installation using an existing network topology and an Internet connection. The second issue is the optimization procedure of the operating system and the installation and configuration of server services. The discussed problems are kernel and services configuration, and system and services optimization. The third issue is about client-server applications. Using operating system utility calls, we present an original application which displays system information in a friendly web interface. An original program designed for molecular structure analysis was adapted for system performance comparisons, and it serves as the basis for a discussion of the computation speed of Pentium, Pentium II and Pentium III processors. The last issue of the paper discusses the installation and configuration aspects of a dial-in service on a UNIX-based operating system. The discussion includes the configuration of serial ports and the ppp and pppd services, and the use of the ppp and tun devices.

  3. Tank Information System (TIS): a Case Study in Migrating Web Mapping Application from Flex to Dojo for ArcGIS Server and then to Open Source

    Science.gov (United States)

    Pulsani, B. R.

    2017-11-01

    Tank Information System is a web application which provides comprehensive information about minor irrigation tanks of Telangana State. As part of the program, a web mapping application using Flex and ArcGIS Server was developed to make the data available to the public. In course of time, as Flex became outdated, a migration of the client interface to the latest JavaScript-based technologies was carried out. Initially, the Flex based application was migrated to the ArcGIS JavaScript API using the Dojo Toolkit. Both client applications used published services from ArcGIS Server. To check the migration pattern from proprietary to open source, the JavaScript-based ArcGIS application was later migrated to OpenLayers and the Dojo Toolkit, using a published service from GeoServer. The migration pattern noticed in the study especially emphasizes the use of the Dojo Toolkit and a PostgreSQL database for ArcGIS Server so that migration to open source can be performed effortlessly. The current application provides a case study which could assist organizations in migrating their proprietary ArcGIS web applications to open source. Furthermore, the study reveals the cost benefits of adopting open source against commercial software.

  4. Integration of QR codes into an anesthesia information management system for resident case log management.

    Science.gov (United States)

    Avidan, Alexander; Weissman, Charles; Levin, Phillip D

    2015-04-01

    Quick response (QR) codes containing anesthesia syllabus data were introduced into an anesthesia information management system. The code was generated automatically at the conclusion of each case and was available for resident case logging using a smartphone or tablet. The goal of this study was to evaluate the use and usability/user-friendliness of such a system. Resident case logging practices were assessed prior to introducing the QR codes. QR code use and satisfaction among residents were reassessed at three and six months. Before QR code introduction only 12/23 (52.2%) residents maintained a case log. Most of the remaining residents (9/23, 39.1%) expected to receive a case list from the anesthesia information management system database at the end of their residency. At three months and six months, 17/26 (65.4%) and 15/25 (60.0%) residents, respectively, were using the QR codes. Satisfaction was rated as very good or good. QR codes for residents' case logging with smartphones or tablets were successfully introduced into an anesthesia information management system and used by most residents. QR codes can be successfully implemented into medical practice to support data transfer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
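
    A sketch of the automatic generation step under stated assumptions: the third-party qrcode package (with Pillow) is installed, and the case fields shown are invented placeholders rather than the actual syllabus fields used in the anesthesia information management system:

```python
# Encode a summary of a finished case as a QR code image that a resident can
# scan into a personal case log. Field names are hypothetical.
import json
import qrcode

case = {
    "date": "2015-01-15",
    "procedure": "laparoscopic cholecystectomy",
    "anesthesia": "general",
    "asa_class": 2,
}

img = qrcode.make(json.dumps(case))   # returns a PIL image
img.save("case_qr.png")               # displayed at the end of the case for scanning
```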

  5. Asynchronous data change notification between database server and accelerator control systems

    International Nuclear Information System (INIS)

    Wenge Fu; Seth Nemesure; Morris, J.

    2012-01-01

    Database data change notification (DCN) is a commonly used feature: it allows a client to be informed when data has been changed on the server side by another client. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time consuming. In accelerator control systems, there are many well-established client/server software architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. (authors)
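
    The paper builds ADCN on control-system reflection servers over trigger-capable DBMSs such as Oracle and MS SQL Server; as a self-contained stand-in, the sketch below shows the same trigger-and-notify pattern using PostgreSQL's LISTEN/NOTIFY through psycopg2, with placeholder connection details and channel name:

```python
# Generic change-notification listener: a trigger on the watched table would
# issue NOTIFY data_changes with a payload describing the change; this client
# waits for those notifications and could forward them via a reflection server.
import select
import psycopg2

conn = psycopg2.connect("dbname=accel user=monitor")   # placeholder DSN
conn.autocommit = True
cur = conn.cursor()
cur.execute("LISTEN data_changes;")

print("waiting for data change notifications...")
while True:
    # block until the database connection has something to report (5 s poll timeout)
    if select.select([conn], [], [], 5) == ([], [], []):
        continue
    conn.poll()
    while conn.notifies:
        note = conn.notifies.pop(0)
        print(f"change on channel {note.channel}: {note.payload}")
        # here a reflection server would push the new value to SET/GET clients
```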

  6. Web Server Embedded System

    Directory of Open Access Journals (Sweden)

    Adharul Muttaqin

    2014-07-01

    Full Text Available Embedded systems currently receive particular attention in computer technology; several Linux operating systems and a variety of web servers have been prepared to support embedded systems, and one of the applications that can run on an embedded system is a web server. The choice of web server for an embedded environment is still rarely examined, so this study focuses on two web server applications whose main feature is "lightness" in CPU and memory consumption: Light HTTPD and Tiny HTTPD. Using thread parameters (users, ramp-up periods, and loop count) in a stress test of the embedded system, this study determines which of Light HTTPD and Tiny HTTPD is better suited to embedded use on a BeagleBoard in terms of CPU and memory consumption. The results show that, with respect to CPU consumption on the BeagleBoard embedded system, Light HTTPD is recommended over Tiny HTTPD, because there is a very significant difference in CPU load between the two web services. Keywords: embedded system, web server

  7. Design of SIP transformation server for efficient media negotiation

    Science.gov (United States)

    Pack, Sangheon; Paik, Eun Kyoung; Choi, Yanghee

    2001-07-01

    Voice over IP (VoIP) is one of the advanced services supported by next-generation mobile communication. VoIP should support various media formats and terminals existing together. This heterogeneous environment may prevent diverse users from establishing VoIP sessions among them. To solve this problem, an efficient media negotiation mechanism is required. In this paper, we propose an efficient media negotiation architecture using the transformation server and the Intelligent Location Server (ILS). The transformation server is an extended Session Initiation Protocol (SIP) proxy server. It can modify an unacceptable session INVITE message into an acceptable one using the ILS. The ILS is a directory server based on the Lightweight Directory Access Protocol (LDAP) that keeps users' location information and available media information. The proposed architecture can eliminate the unnecessary response and re-INVITE messages of the standard SIP architecture. It takes only 1.5 round trip times to negotiate two different media types, while the standard media negotiation mechanism takes 2.5 round trip times. The extra processing time in message handling is negligible in comparison to the reduced round trip time. The experimental results show that the session setup time in the proposed architecture is less than the setup time in the standard SIP. These results verify that the proposed media negotiation mechanism is more efficient in solving diversity problems.

  8. Near Real-Time Dissemination of Geo-Referenced Imagery by an Enterprise Server

    National Research Council Canada - National Science Library

    Brown, Alison; Gilbert, Chris; Holland, Heather; Lu, Yan

    2006-01-01

    .... The payload is connected through a data link to a ground-based server that can process the georegistered data in near-real-time using our GeoReferenced Information Manager (GRIM) Enterprise Server...

  9. Drilling, logging, and testing information from borehole UE-25 UZ #16, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Thamir, F.; Thordarson, W.; Kume, J.; Rousseau, J.; Cunningham, D.M. Jr.

    1998-01-01

    Borehole UE-25 UZ #16 is the first of two boreholes that may be used to determine the subsurface structure at Yucca Mountain by using vertical seismic profiling. This report contains information collected while this borehole was being drilled, logged, and tested from May 27, 1992, to April 22, 1994. It does not contain the vertical seismic profiling data. This report is intended to be used as: (1) a reference for drilling similar boreholes in the same area, (2) a data source on this borehole, and (3) a reference for other information that is available from this borehole. The reference information includes drilling chronology, equipment, parameters, coring methods, penetration rates, completion information, drilling problems, and corrective actions. The data sources include lithology, fracture logs, a list of available borehole logs, and depths at which water was recorded. Other information is listed in an appendix that includes studies done after April 22, 1994

  10. DECENTRALIZED SOCIAL NETWORK SERVICE USING THE WEB HOSTING SERVER FOR PRIVACY PRESERVATION

    Directory of Open Access Journals (Sweden)

    Yoonho Nam

    2013-10-01

    Full Text Available In recent years, the number of subscribers of social network services such as Facebook and Twitter has increased rapidly. In accordance with the increasing popularity of social network services, concerns about user privacy are also growing. Existing social network services have a centralized structure in which a service provider collects all of the user's profile and logs until the end of the connection. The information collected is typically useful for commercial purposes, but it may lead to serious violations of user privacy. The user's profile can be compromised for malicious purposes, and may even become a tool of surveillance. In this paper, we remove the centralized structure to prevent the service provider from collecting all users' information indiscriminately, and present a decentralized structure using web hosting servers. The service provider provides only the service applications to web hosting companies, and the user should select a web hosting company that he trusts. Thus, the user's information is distributed, and the user's privacy is protected from the service provider.

  11. Characteristics and Energy Use of Volume Servers in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Fuchs, H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shehabi, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ganeshalingam, M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, L. -B. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lim, B. [Fraunhofer Center for Sustainable Energy Systems, Boston, MA (United States); Roth, K. [Fraunhofer Center for Sustainable Energy Systems, Boston, MA (United States); Tsao, A. [Navigant Consulting Inc., Chicago, IL (United States)

    2017-11-01

    Servers’ field energy use remains poorly understood, given heterogeneous computing loads, configurable hardware and software, and operation over a wide range of management practices. This paper explores various characteristics of 1- and 2-socket volume servers that affect energy consumption, and quantifies the difference in power demand between higher-performing SPEC and ENERGY STAR servers and our best understanding of a typical server operating today. We first establish general characteristics of the U.S. installed base of volume servers from existing IDC data and the literature, before presenting information on server hardware configurations from data collection events at a major online retail website. We then compare cumulative distribution functions of server idle power across three separate datasets and explain the differences between them via examination of the hardware characteristics to which power draw is most sensitive. We find that idle server power demand is significantly higher than ENERGY STAR benchmarks and the industry-released energy use documented in SPEC, and that SPEC server configurations—and likely the associated power-scaling trends—are atypical of volume servers. Next, we examine recent trends in server power draw among high-performing servers across their full load range to consider how representative these trends are of all volume servers before inputting weighted average idle power load values into a recently published model of national server energy use. Finally, we present results from two surveys of IT managers (n=216) and IT vendors (n=178) that illustrate the prevalence of more-efficient equipment and operational practices in server rooms and closets; these findings highlight opportunities to improve the energy efficiency of the U.S. server stock.

  12. Professional SQL Server 2005 administration

    CERN Document Server

    Knight, Brian; Snyder, Wayne; Armand, Jean-Claude; LoForte, Ross; Ji, Haidong

    2007-01-01

    SQL Server 2005 is the largest leap forward for SQL Server since its inception. With this update comes new features that will challenge even the most experienced SQL Server DBAs. Written by a team of some of the best SQL Server experts in the industry, this comprehensive tutorial shows you how to navigate the vastly changed landscape of the SQL Server administration. Drawing on their own first-hand experiences to offer you best practices, unique tips and tricks, and useful workarounds, the authors help you handle even the most difficult SQL Server 2005 administration issues, including blockin

  13. QlikView Server and Publisher

    CERN Document Server

    Redmond, Stephen

    2014-01-01

    This is a comprehensive guide with a step-by-step approach that enables you to host and manage servers using QlikView Server and QlikView Publisher. If you are a server administrator wanting to learn how to deploy QlikView Server for server management, analysis and testing, and QlikView Publisher for publishing business content, then this is the perfect book for you. No prior experience with QlikView is expected.

  14. SDSS Log Viewer: visual exploratory analysis of large-volume SQL log data

    Science.gov (United States)

    Zhang, Jian; Chen, Chaomei; Vogeley, Michael S.; Pan, Danny; Thakar, Ani; Raddick, Jordan

    2012-01-01

    User-generated Structured Query Language (SQL) queries are a rich source of information for database analysts, information scientists, and the end users of databases. In this study a group of scientists in astronomy and computer and information scientists work together to analyze a large volume of SQL log data generated by users of the Sloan Digital Sky Survey (SDSS) data archive in order to better understand users' data seeking behavior. While statistical analysis of such logs is useful at aggregated levels, efficiently exploring specific patterns of queries is often a challenging task due to the typically large volume of the data, multivariate features, and data requirements specified in SQL queries. To enable and facilitate effective and efficient exploration of the SDSS log data, we designed an interactive visualization tool, called the SDSS Log Viewer, which integrates time series visualization, text visualization, and dynamic query techniques. We describe two analysis scenarios of visual exploration of SDSS log data, including understanding unusually high daily query traffic and modeling the types of data seeking behaviors of massive query generators. The two scenarios demonstrate that the SDSS Log Viewer provides a novel and potentially valuable approach to support these targeted tasks.

  15. Selection of Server-Side Technologies for an E-Business Curriculum

    Science.gov (United States)

    Sandvig, J. Christopher

    2007-01-01

    The rapid growth of e-business and e-commerce has made server-side programming an increasingly important topic in information systems (IS) and computer science (CS) curricula. This article presents an overview of the major features of several popular server-side programming technologies and discusses the factors that influence the selection of…

  16. Saving Money and Time with Virtual Server

    CERN Document Server

    Sanders, Chris

    2006-01-01

    Microsoft Virtual Server 2005 consistently proves to be worth its weight in gold, with new implementations thought up every day. With this product now a free download from Microsoft, scores of new users are able to experience what the power of virtualization can do for their networks. This guide is aimed at network administrators who are interested in ways that Virtual Server 2005 can be implemented in their organizations in order to save money and increase network productivity. It contains information on setting up a virtual network, virtual consolidation, virtual security, virtual honeypo

  17. Log quality enhancement: A systematic assessment of logging company wellsite performance and log quality

    International Nuclear Information System (INIS)

    Farnan, R.A.; Mc Hattie, C.M.

    1984-01-01

    To improve the monitoring of logging company performance, computer programs were developed to assess information en masse from log quality check lists completed on wellsite by the service company engineer and Phillips representative. A study of all logging jobs performed by different service companies for Phillips in Oklahoma (panhandle excepted) during 1982 enabled several pertinent and beneficial interpretations to be made. Company A provided the best tool and crew service. Company B incurred an excessive amount of lost time related to tool failure, in particular the neutron-density tool combination. Company C, although used only three times, incurred no lost time. With a reasonable data base valid conclusions were made pertaining, for example, to repeated tool malfunctions. The actual logs were then assessed for quality

  18. Improved materials management through client/server computing

    International Nuclear Information System (INIS)

    Brooks, D.; Neilsen, E.; Reagan, R.; Simmons, D.

    1992-01-01

    This paper reports that materials management and procurement impact every organization within an electric utility, from power generation to customer service. An efficient materials management and procurement system can help improve productivity and minimize operating costs. It is no longer sufficient to simply automate materials management using inventory control systems. Smart companies are building centralized data warehouses and use the client/server style of computing to provide real-time data access. This paper describes how Alabama Power Company, Southern Company Services and Digital Equipment Corporation transformed two existing applications, a purchase order application within DEC's ALL-IN-1 environment and a materials management application within an IBM CICS environment, into a data warehouse - client/server application. An application server is used to overcome incompatibilities between computing environments and provide easy, real-time access to information residing in multi-vendor environments

  19. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins.

    Science.gov (United States)

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-07-01

    Predicting protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently computational druggability prediction models are attached to one unique pocket estimation method despite pocket estimation uncertainties. In this paper, we propose 'PockDrug-Server' to predict pocket druggability, efficient on both (i) estimated pockets guided by the ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, thus efficient using apo pockets that are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be carried out from one or a set of apo/holo proteins using different pocket estimation methods proposed by our web server or from any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. TANK INFORMATION SYSTEM (TIS): A CASE STUDY IN MIGRATING WEB MAPPING APPLICATION FROM FLEX TO DOJO FOR ARCGIS SERVER AND THEN TO OPEN SOURCE

    Directory of Open Access Journals (Sweden)

    B. R. Pulsani

    2017-11-01

    Full Text Available Tank Information System is a web application which provides comprehensive information about minor irrigation tanks of Telangana State. As part of the program, a web mapping application using Flex and ArcGIS Server was developed to make the data available to the public. In course of time, as Flex became outdated, a migration of the client interface to the latest JavaScript-based technologies was carried out. Initially, the Flex based application was migrated to the ArcGIS JavaScript API using the Dojo Toolkit. Both client applications used published services from ArcGIS Server. To check the migration pattern from proprietary to open source, the JavaScript-based ArcGIS application was later migrated to OpenLayers and the Dojo Toolkit, using a published service from GeoServer. The migration pattern noticed in the study especially emphasizes the use of the Dojo Toolkit and a PostgreSQL database for ArcGIS Server so that migration to open source can be performed effortlessly. The current application provides a case study which could assist organizations in migrating their proprietary ArcGIS web applications to open source. Furthermore, the study reveals the cost benefits of adopting open source against commercial software.

  1. Solid Waste Information and Tracking System Server Conversion Project Management Plan

    International Nuclear Information System (INIS)

    GLASSCOCK, J.A.

    2000-01-01

    The Project Management Plan governing the conversion of SWITS to a client-server architecture. The PMP describes the background, planning and management of the SWITS conversion. Requirements and specification documentation needed for the SWITS conversion

  2. PERFORMANCE MEASUREMENT OF THE ROUND-ROBIN SCHEDULER FOR LINUX VIRTUAL SERVER IN THE WEB SERVER CASE

    Directory of Open Access Journals (Sweden)

    Royyana Muslim Ijtihadie

    2005-07-01

    Full Text Available With the growing number of Internet users and the adoption of the Internet in everyday life, data traffic on the Internet has increased significantly. Along with this, the workload of the servers providing services on the Internet has also risen considerably, which can cause a server to become overloaded at some point. To address this, a server-cluster configuration scheme based on the load-balancing concept is applied. A load-balancing server applies an algorithm to divide the work. The round-robin algorithm is used in Linux Virtual Server. This study measures the performance of a Linux Virtual Server that uses the round-robin algorithm to schedule the distribution of load across the servers. Performance is measured from the side of clients trying to access the web server; the metrics are the number of requests completed per second (requests per second), the time to complete a single request, and the resulting throughput. The experiments show that using LVS can improve performance, namely by increasing the number of requests per second
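
    A sketch of the scheduling policy being benchmarked, with placeholder server names: a round-robin dispatcher hands each incoming request to the next real server in strict rotation regardless of its current load, which is precisely why requests per second, time per request, and throughput are the metrics of interest:

```python
# Round-robin dispatch of requests across a pool of real servers.
import itertools
from collections import Counter

class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def dispatch(self, request_id):
        return next(self._cycle)          # next server in strict rotation

balancer = RoundRobinBalancer(["real1", "real2", "real3"])
assignments = Counter(balancer.dispatch(i) for i in range(9999))
print(assignments)   # requests split evenly: 3333 per real server

# A benchmark in the spirit of the paper would then replay recorded client
# requests against the virtual server and record requests per second, time per
# request, and throughput for each configuration.
```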

  3. Mastering Microsoft Exchange Server 2010

    CERN Document Server

    McBee, Jim

    2010-01-01

    A top-selling guide to Exchange Server-now fully updated for Exchange Server 2010. Keep your Microsoft messaging system up to date and protected with the very newest version, Exchange Server 2010, and this comprehensive guide. Whether you're upgrading from Exchange Server 2007 SP1 or earlier, installing for the first time, or migrating from another system, this step-by-step guide provides the hands-on instruction, practical application, and real-world advice you need.: Explains Microsoft Exchange Server 2010, the latest release of Microsoft's messaging system that protects against spam and vir

  4. Optimal control of a server farm

    NARCIS (Netherlands)

    Adan, I.J.B.F.; Kulkarni, V.G.; Wijk, van A.C.C.

    2013-01-01

    We consider a server farm consisting of ample exponential servers that serve a Poisson stream of arriving customers. Each server can be either busy, idle or off. An arriving customer will immediately occupy an idle server, if there is one, and otherwise, an off server will be turned on and start

  5. NEOS Server 4.0 Administrative Guide

    OpenAIRE

    Dolan, Elizabeth D.

    2001-01-01

    The NEOS Server 4.0 provides a general Internet-based client/server as a link between users and software applications. The administrative guide covers the fundamental principles behind the operation of the NEOS Server, installation and troubleshooting of the Server software, and implementation details of potential interest to a NEOS Server administrator. The guide also discusses making new software applications available through the Server, including areas of concern to remote solver adminis...

  6. Microsoft SQL Server 2012 bible

    CERN Document Server

    Jorgensen, Adam; LeBlanc, Patrick; Cherry, Denny; Nelson, Aaron

    2012-01-01

    Harness the powerful new SQL Server 2012 Microsoft SQL Server 2012 is the most significant update to this product since 2005, and it may change how database administrators and developers perform many aspects of their jobs. If you're a database administrator or developer, Microsoft SQL Server 2012 Bible teaches you everything you need to take full advantage of this major release. This detailed guide not only covers all the new features of SQL Server 2012, it also shows you step by step how to develop top-notch SQL Server databases and new data connections and keep your databases performing at p

  7. Windows Home Server users guide

    CERN Document Server

    Edney, Andrew

    2008-01-01

    Windows Home Server brings the idea of centralized storage, backup and computer management out of the enterprise and into the home. Windows Home Server is built for people with multiple computers at home and helps to synchronize them, keep them updated, stream media between them, and back them up centrally. Built on a similar foundation as the Microsoft server operating products, it's essentially Small Business Server for the home.This book details how to install, configure, and use Windows Home Server and explains how to connect to and manage different clients such as Windows XP, Windows Vist

  8. Interpretation of horizontal well production logs: influence of logging tool

    Energy Technology Data Exchange (ETDEWEB)

    Ozkan, E. [Colorado School of Mines, Boulder, CO (United States); Sarica, C. [Pennsylvania State Univ., College Park, PA (United States); Haci, M. [Drilling Measurements, Inc (United States)

    1998-12-31

    The influence of a production-logging tool on wellbore flow rate and pressure measurements was investigated, focusing on the disturbance caused by the production-logging tool and the coiled tubing on the original flow conditions in the wellbore. The investigation was carried out using an analytical model, and single-phase liquid flow was assumed. Results showed that the production-logging tool influenced the measurements, as shown by the deviation from the original flow-rate and pressure profiles, especially in low-conductivity wellbores. High production rates increase the effect of the production-logging tool. Recovering or inferring the original flow conditions in the wellbore from the production-logging data is a very complex process which cannot be solved easily. For this reason, the conditions under which the information obtained by production logging is meaningful are of considerable practical interest. 7 refs., 2 tabs., 15 figs.

  9. Linux Server Security

    CERN Document Server

    Bauer, Michael D

    2005-01-01

    Linux consistently appears high up in the list of popular Internet servers, whether it's for the Web, anonymous FTP, or general services such as DNS and delivering mail. But security is the foremost concern of anyone providing such a service. Any server experiences casual probe attempts dozens of times a day, and serious break-in attempts with some frequency as well. This highly regarded book, originally titled Building Secure Servers with Linux, combines practical advice with a firm knowledge of the technical tools needed to ensure security. The book focuses on the most common use of Linux--

  10. Open client/server computing and middleware

    CERN Document Server

    Simon, Alan R

    2014-01-01

    Open Client/Server Computing and Middleware provides a tutorial-oriented overview of open client/server development environments and how client/server computing is being done.This book analyzes an in-depth set of case studies about two different open client/server development environments-Microsoft Windows and UNIX, describing the architectures, various product components, and how these environments interrelate. Topics include the open systems and client/server computing, next-generation client/server architectures, principles of middleware, and overview of ProtoGen+. The ViewPaint environment

  11. Identifying APT Malware Domain Based on Mobile DNS Logging

    Directory of Open Access Journals (Sweden)

    Weina Niu

    2017-01-01

    Full Text Available Advanced Persistent Threat (APT) is a serious threat against sensitive information. Current detection approaches are time-consuming since they detect APT attacks by in-depth analysis of massive amounts of data after data breaches. Specifically, APT attackers make use of DNS to locate their command and control (C&C) servers and victims' machines. In this paper, we propose an efficient approach to detect APT malware C&C domains with high accuracy by analyzing DNS logs. We first extract 15 features from the DNS logs of mobile devices. According to Alexa ranking and VirusTotal's judgement, we give each domain a score. Then, we select the most normal domains by the score metric. Finally, we utilize our anomaly detection algorithm, called Global Abnormal Forest (GAF), to identify malware C&C domains. We conduct a performance analysis to demonstrate that our approach is more efficient than other existing works in terms of calculation efficiency and recognition accuracy. Compared with Local Outlier Factor (LOF), k-Nearest Neighbor (KNN), and Isolation Forest (iForest), our approach obtains more than 99% F-M and R for the detection of C&C domains. Our approach not only reduces the volume of data that needs to be recorded and analyzed but is also applicable to unsupervised learning.
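
    As a hedged illustration of the feature-then-anomaly-detection pipeline, the sketch below uses scikit-learn's IsolationForest (one of the baselines the paper compares against, not the authors' GAF algorithm) over three invented DNS features; the paper itself extracts 15 features from mobile DNS logs:

```python
# Flag anomalous domains from per-domain DNS features with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

# one row per domain: [queries per day, distinct client count, mean TTL]
domains = ["news.example.com", "cdn.example.net", "update.example.org",
           "qx7-beacon.example.biz"]
X = np.array([
    [5200, 480, 3600],
    [8100, 610, 1800],
    [4300, 390, 3600],
    [9, 1, 30],          # rarely queried, single client, tiny TTL: C&C-like
])

model = IsolationForest(contamination=0.25, random_state=0).fit(X)
labels = model.predict(X)                 # -1 marks anomalous domains
for name, label in zip(domains, labels):
    if label == -1:
        print("candidate C&C domain:", name)
```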

  12. Learning SQL Server Reporting Services 2012

    CERN Document Server

    Krishnaswamy, Jayaram

    2013-01-01

    The book is packed with clear instructions and plenty of screenshots, providing all the support and guidance you will need as you begin to generate reports with SQL Server 2012 Reporting Services. This book is for those who are new to SQL Server Reporting Services 2012 and aspiring to create and deploy cutting edge reports. This book is for report developers, report authors, ad-hoc report authors and model developers, and Report Server and SharePoint Server Integrated Report Server administrators. Minimal knowledge of SQL Server is assumed and SharePoint experience would be helpful.

  13. Client/Server Architecture Promises Radical Changes.

    Science.gov (United States)

    Freeman, Grey; York, Jerry

    1991-01-01

    This article discusses the emergence of the client/server paradigm for the delivery of computer applications in response to the proliferation of microcomputers and local area networks, the applicability of the model in academic institutions, and its implications for college campus information technology organizations. (Author/DB)

  14. Beginning Microsoft SQL Server 2012 Programming

    CERN Document Server

    Atkinson, Paul

    2012-01-01

    Get up to speed on the extensive changes to the newest release of Microsoft SQL Server The 2012 release of Microsoft SQL Server changes how you develop applications for SQL Server. With this comprehensive resource, SQL Server authority Robert Vieira presents the fundamentals of database design and SQL concepts, and then shows you how to apply these concepts using the updated SQL Server. Published to coincide with the 2012 release, Beginning Microsoft SQL Server 2012 Programming begins with a quick overview of database design basics and the SQL query language and then quickly proceeds to sho

  15. The SAMGrid database server component: its upgraded infrastructure and future development path

    International Nuclear Information System (INIS)

    Loebel-Carpenter, L.; White, S.; Baranovski, A.; Garzoglio, G.; Herber, R.; Illingworth, R.; Kennedy, R.; Kreymer, A.; Kumar, A.; Lueking, L.; Lyon, A.; Merritt, W.; Terekhov, I.; Trumbo, J.; Veseli, S.; Burgon-Lyon, M.; St Denis, R.; Belforte, S.; Kerzel, U.; Bartsch, V.; Leslie, M.

    2004-01-01

    The SAMGrid Database Server encapsulates several important services, such as accessing file metadata and replica catalog, keeping track of the processing information, as well as providing the runtime support for SAMGrid station services. Recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes required for the unified metadata catalog has warranted a complete redesign of the DB Server. We describe here the architecture and features of the new server. In particular, we discuss the new CORBA infrastructure that utilizes python wrapper classes around IDL structs and exceptions. Such infrastructure allows us to use the same code on both server and client sides, which in turn results in significantly improved code maintainability and easier development. We also discuss future integration of the new server with an SBIR II project which is directed toward allowing the DB Server to access distributed databases, implemented in different DB systems and possibly using different schema

  16. SPEER-SERVER: a web server for prediction of protein specificity determining sites.

    Science.gov (United States)

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J; Panchenko, Anna R; Chakrabarti, Saikat

    2012-07-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids' Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/.
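
    A minimal sketch of the conservation-pattern idea behind SDS prediction is given below; it is not the SPEER scoring scheme, only per-column Shannon entropy over two toy subfamily alignments used as a crude proxy for a specificity signal.

      # Toy sketch: columns that are conserved within each subfamily but differ
      # between subfamilies hint at specificity determining sites.
      import math
      from collections import Counter

      def column_entropy(column):
          counts = Counter(column)
          total = sum(counts.values())
          return -sum((c / total) * math.log2(c / total) for c in counts.values())

      subfamily_a = ["ACDKL", "ACDKI", "ACEKL"]   # toy alignment, subfamily A
      subfamily_b = ["ACHRL", "ACHRI", "ACHRL"]   # toy alignment, subfamily B

      for i in range(len(subfamily_a[0])):
          col_a = [seq[i] for seq in subfamily_a]
          col_b = [seq[i] for seq in subfamily_b]
          disjoint = set(col_a).isdisjoint(col_b)   # different residues between subfamilies
          print(i, round(column_entropy(col_a), 2), round(column_entropy(col_b), 2), disjoint)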

  17. Log4J

    CERN Document Server

    Perry, Steven

    2009-01-01

    Log4j has been around for a while now, and it seems like so many applications use it. I've used it in my applications for years now, and I'll bet you have too. But every time I need to do something with log4j I've never done before I find myself searching for examples of how to do whatever that is, and I don't usually have much luck. I believe the reason for this is that there is not a great deal of useful information about log4j, either in print or on the Internet. The information is too simple to be of real-world use, too complicated to be distilled quickly (which is what most developers

  18. Mastering Microsoft Exchange Server 2013

    CERN Document Server

    Elfassy, David

    2013-01-01

    The bestselling guide to Exchange Server, fully updated for the newest version Microsoft Exchange Server 2013 is touted as a solution for lowering the total cost of ownership, whether deployed on-premises or in the cloud. Like the earlier editions, this comprehensive guide covers every aspect of installing, configuring, and managing this multifaceted collaboration system. It offers Windows systems administrators and consultants a complete tutorial and reference, ideal for anyone installing Exchange Server for the first time or those migrating from an earlier Exchange Server version.Microsoft

  19. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    Science.gov (United States)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014, Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367) tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System Bodies are going to be progressively available. Derived parameters such as summary products and indices can be produced through WCPS queries, as well as derived imagery colour combination products, dynamically generated and accessed also through OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the Open Source NASA WorldWind (e.g. Hogan, 2011) virtual globe as its visualisation engine, and the array database Rasdaman Community Edition as the core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible on http://planetserver.eu. All its code base is going to be available on GitHub, on

  20. Exam 70-411 administering Windows Server 2012

    CERN Document Server

    Course, Microsoft Official Academic

    2014-01-01

    Microsoft Windows Server is a multi-purpose server designed to increase reliability and flexibility of a network infrastructure. Windows Server is the paramount tool used by enterprises in their datacenter and desktop strategy. The most recent versions of Windows Server also provide both server and client virtualization. Its ubiquity in the enterprise results in the need for networking professionals who know how to plan, design, implement, operate, and troubleshoot networks relying on Windows Server. Microsoft Learning is preparing the next round of its Windows Server Certification program

  1. Advanced server virtualization VMware and Microsoft platforms in the virtual data center

    CERN Document Server

    Marshall, David; McCrory, Dave

    2006-01-01

    Executives of IT organizations are compelled to quickly implement server virtualization solutions because of significant cost savings. However, most IT professionals tasked with deploying virtualization solutions have little or no experience with the technology. This creates a high demand for information on virtualization and how to properly implement it in a datacenter. Advanced Server Virtualization: VMware® and Microsoft® Platforms in the Virtual Data Center focuses on the core knowledge needed to evaluate, implement, and maintain an environment that is using server virtualization. This boo

  2. Design and implementation of an enterprise information system utilizing a component based three-tier client/server database system

    OpenAIRE

    Akbay, Murat.; Lewis, Steven C.

    1999-01-01

    The Naval Security Group currently requires a modern architecture to merge existing command databases into a single Enterprise Information System through which each command may manipulate administrative data. There are numerous technologies available to build and implement such a system. Component-based architectures are extremely well-suited for creating scalable and flexible three-tier Client/Server systems because the data and business logic are encapsulated within objects, allowing them t...

  3. Mac OS X Lion Server For Dummies

    CERN Document Server

    Rizzo, John

    2011-01-01

    The perfect guide to help administrators set up Apple's Mac OS X Lion Server With the overwhelming popularity of the iPhone and iPad, more Macs are appearing in corporate settings. The newest version of Mac Server is the ideal way to administer a Mac network. This friendly guide explains to both Windows and Mac administrators how to set up and configure the server, including services such as iCal Server, Podcast Producer, Wiki Server, Spotlight Server, iChat Server, File Sharing, Mail Services, and support for iPhone and iPad. It explains how to secure, administer, and troubleshoot the networ

  4. Learning Zimbra Server essentials

    CERN Document Server

    Kouka, Abdelmonam

    2013-01-01

    A standard tutorial approach which will guide the readers on all of the intricacies of the Zimbra Server.If you are any kind of Zimbra user, this book will be useful for you, from newbies to experts who would like to learn how to setup a Zimbra server. If you are an IT administrator or consultant who is exploring the idea of adopting, or have already adopted Zimbra as your mail server, then this book is for you. No prior knowledge of Zimbra is required.

  5. Map server of Slovak Environmental Agency

    International Nuclear Information System (INIS)

    Koska, M.

    2005-01-01

    The Slovak Environmental Agency (SAZP) is a professional organization of the Ministry of Environment of the Slovak Republic. In the area of informatics, SAZP is responsible for the operation of the information system on the environment in the Slovak Republic (ISE). The main goal of the ISE is the collection, evaluation and provision of relevant environmental information among organizations of state administration, public administration, the public, scientific institutes, etc. SAZP uses technology for publishing geospatial data as so-called web maps (dynamic mapping), where maps are generated online. An internet map server forms the technological core of the information system.

  6. Microsoft Windows Server Administration Essentials

    CERN Document Server

    Carpenter, Tom

    2011-01-01

    The core concepts and technologies you need to administer a Windows Server OS Administering a Windows operating system (OS) can be a difficult topic to grasp, particularly if you are new to the field of IT. This full-color resource serves as an approachable introduction to understanding how to install a server, the various roles of a server, and how server performance and maintenance impacts a network. With a special focus placed on the new Microsoft Technology Associate (MTA) certificate, the straightforward, easy-to-understand tone is ideal for anyone new to computer administration looking t

  7. [GeoServer, the Open Source geospatial server: what's new in version 2.3.0]

    Directory of Open Access Journals (Sweden)

    Simone Giannecchini

    2013-04-01

    GeoServer is an Open Source geospatial server developed with Java Enterprise technology for managing, sharing and editing geospatial data according to the OGC and ISO Technical Committee 211 standards. It provides the basic functionality to create spatial data infrastructures (SDI) and is designed for interoperability, publishing data from any major spatial data source using open standards: it is the reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) and Web Coverage Service (WCS) standards, as well as a high-performance certified compliant Web Map Service (WMS). GeoServer forms a core component of the Geospatial Web.

  8. [The therapeutic drug monitoring network server of tacrolimus for Chinese renal transplant patients].

    Science.gov (United States)

    Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei

    2011-07-01

    This study aims to develop a therapeutic drug monitoring (TDM) network server of tacrolimus for Chinese renal transplant patients, which can help doctors manage patients' information and provide three levels of prediction. The database management system MySQL was employed to build and manage the database of patients' and doctors' information, and hypertext mark-up language (HTML) and Java server pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, the above programming languages were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. It is shown that the network server has the basic functions for database management and three levels of prediction to aid doctors in optimizing the regimen of tacrolimus for Chinese renal transplant patients.
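
    The Bayesian step described above can be sketched as minimizing a negative log posterior that combines the fit to observed concentrations with a prior centred on the population parameters. The one-compartment oral model, dose, observations and prior values below are illustrative assumptions, not the published tacrolimus population model.

      # Hedged sketch: maximum a posteriori estimate of individual PK parameters.
      import numpy as np
      from scipy.optimize import minimize

      times = np.array([1.0, 2.0, 4.0, 8.0, 12.0])      # hours after dose
      observed = np.array([9.5, 12.0, 10.0, 6.0, 3.5])   # ng/mL (toy data)
      dose = 5000.0                                       # micrograms (assumed)

      pop_mean = np.log(np.array([30.0, 400.0, 1.5]))     # CL (L/h), V (L), ka (1/h)
      pop_sd = np.array([0.3, 0.3, 0.5])                  # between-subject SD, log scale
      sigma = 1.5                                         # residual SD (ng/mL)

      def predict(log_params, t):
          cl, v, ka = np.exp(log_params)
          ke = cl / v
          return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

      def neg_log_posterior(log_params):
          resid = observed - predict(log_params, times)
          likelihood = np.sum(resid ** 2) / (2 * sigma ** 2)
          prior = np.sum((log_params - pop_mean) ** 2 / (2 * pop_sd ** 2))
          return likelihood + prior

      fit = minimize(neg_log_posterior, pop_mean, method="Nelder-Mead")
      print(np.exp(fit.x))   # individual CL, V, ka estimates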

  9. Professional Microsoft SQL Server 2012 Administration

    CERN Document Server

    Jorgensen, Adam; LoForte, Ross; Knight, Brian

    2012-01-01

    An essential how-to guide for experienced DBAs on the most significant product release since 2005! Microsoft SQL Server 2012 brings major changes throughout SQL Server and will impact how DBAs administer the database. With this book, a team of well-known SQL Server experts introduces the many new features of the most recent version of SQL Server and deciphers how these changes will affect the methods that administrators have been using for years. Loaded with unique tips, tricks, and workarounds for handling the most difficult SQL Server admin issues, this how-to guide deciphers topics s

  10. Setup Instructions for the Applied Anomaly Detection Tool (AADT) Web Server

    Science.gov (United States)

    2016-09-01

    The Applied Anomaly Detection Tool (AADT) has been developed for several platforms: Android, iOS, and Windows. The Windows version has been developed as a web server that runs under Microsoft Windows, and this report provides setup and installation instructions for it. (Report documentation page fields, including subject terms and security classification markings, have been omitted here.)

  11. From honeybees to Internet servers: biomimicry for distributed management of Internet hosting centers.

    Science.gov (United States)

    Nakrani, Sunil; Tovey, Craig

    2007-12-01

    An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.

  12. Applying Big Data solutions for log analytics in the PanDA infrastructure

    CERN Document Server

    Alekseev, Aleksandr; The ATLAS collaboration

    2017-01-01

    PanDA is the workflow management system of the ATLAS experiment at the LHC and is responsible for generating, brokering and monitoring up to two million jobs per day across 150 computing centers in the Worldwide LHC Computing Grid. The PanDA core consists of several components deployed centrally on around 20 servers. The log volume is around 400 GB per day. In certain cases, troubleshooting a particular issue on the raw log files can be compared to searching for a needle in a haystack and requires a high level of expertise. Therefore we decided to build on trending Big Data solutions and utilize the ELK infrastructure (Filebeat, Logstash, Elastic Search and Kibana) to process, index and analyze our log files. This allows us to overcome troubleshooting complexity, provides a better interface to the operations team and generates advanced analytics to understand our system. This paper will describe the features of the ELK stack, our infrastructure, optimal configuration settings and filters. We will provide ex...
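
    Once logs are indexed this way, operational questions become simple Elasticsearch queries. The sketch below counts recent error-level messages per host; the index pattern "panda-logs-*" and the field names "loglevel" and "host" are assumptions, not the actual PanDA/ELK schema.

      # Hedged sketch: aggregate error messages per host from an Elasticsearch index.
      import json
      import requests

      query = {
          "size": 0,
          "query": {
              "bool": {
                  "must": [
                      {"term": {"loglevel": "ERROR"}},
                      {"range": {"@timestamp": {"gte": "now-1h"}}},
                  ]
              }
          },
          "aggs": {"per_host": {"terms": {"field": "host", "size": 20}}},
      }

      resp = requests.post(
          "http://localhost:9200/panda-logs-*/_search",   # assumed local ELK endpoint
          headers={"Content-Type": "application/json"},
          data=json.dumps(query),
          timeout=10,
      )
      for bucket in resp.json()["aggregations"]["per_host"]["buckets"]:
          print(bucket["key"], bucket["doc_count"])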

  13. SciServer Compute brings Analysis to Big Data in the Cloud

    Science.gov (United States)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    accessing a range of datasets and showing the data flow between storage and compute components. Demos, documentation, and more information can be found at www.sciserver.org. SciServer is funded by the National Science Foundation Award ACI-1261715.

  14. Essential Mac OS X panther server administration integrating Mac OS X server into heterogeneous networks

    CERN Document Server

    Bartosh, Michael

    2004-01-01

    If you've ever wondered how to safely manipulate Mac OS X Panther Server's many underlying configuration files or needed to explain AFP permission mapping--this book's for you. From the command line to Apple's graphical tools, the book provides insight into this powerful server software. Topics covered include installation, deployment, server management, web application services, data gathering, and more

  15. LHCb Online Log Analysis and Maintenance System

    CERN Document Server

    Garnier, J-C

    2011-01-01

    History has shown, many times, that computer logs are the only information an administrator may have about an incident, which could be caused either by a malfunction or an attack. Due to the huge amount of logs that are produced by large-scale IT infrastructures, such as LHCb Online, critical information may be overlooked or simply be drowned in a sea of other messages. This clearly demonstrates the need for an automatic system for long-term maintenance and real-time analysis of the logs. We have constructed a low-cost, fault-tolerant centralized logging system which is able to do in-depth analysis and cross-correlation of every log. This system is capable of handling O(10000) different log sources and numerous formats, while trying to keep the overhead as low as possible. It provides log gathering and management, Offline analysis and Online analysis. We call Offline analysis the procedure of analyzing old logs for critical information, while Online analysis refers to the procedure of early alerting and reacting. ...

  16. Network characteristics for server selection in online games

    Science.gov (United States)

    Claypool, Mark

    2008-01-01

    Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the network characteristics of online game servers are not well-understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provides statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability--latency and fairness. Analysis finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or race games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers that seek to improve game server selection, whether for single or multiple players.

  17. Exchange Server 2010 Administration Real World Skills for MCITP Certification and Beyond (Exams 70-662 and 70-663)

    CERN Document Server

    Stidley, Joel

    2010-01-01

    A soup-to-nuts guide for messaging administrators. Exchange Server is the world's leading e-mail server software. Windows 7 and Server 2008 R2 have made changes that messaging administrators need to know and understand in their daily work with Exchange Server. This Sybex guide focuses on the skills, concepts, technologies, and potential pitfalls that admins in the trenches need to understand. It also provides the information they need to earn MCITP certification.: Updates in Exchange Server, the world's leading e-mail server software, require messaging administrators to update their knowledge

  18. An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image

    Science.gov (United States)

    Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan

    2017-10-01

    It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for business. Currently, the methods to avoid server outages are monitoring and forecasting. A thermal camera can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then the texture feature, the Hu moments feature, as well as a modified entropy feature, are extracted from the segmented regions. These characteristics are applied to analyze and classify thermal faults, and then to make efficient energy-saving thermal management decisions such as job migration. For the larger feature space, principal component analysis is employed to reduce the feature dimensions and guarantee high processing speed without losing the fault feature information. Finally, different feature vectors are taken as input for SVM training, and the thermal fault diagnosis is performed after obtaining the optimized SVM classifier. This method supports suggestions for optimizing data center management: it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
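
    The classification stage described above (region features, dimensionality reduction, SVM) can be sketched as a short pipeline. The random feature vectors below merely stand in for Hu moments and entropy values extracted from segmented infrared images.

      # Hedged sketch: features -> scaling -> PCA -> SVM classifier.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 8))        # 40 segmented regions, 8 features each
      y = rng.integers(0, 2, size=40)     # 0 = normal, 1 = thermal fault (toy labels)

      clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf", C=1.0))
      clf.fit(X, y)
      print(clf.predict(X[:5]))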

  19. A Capacity Supply Model for Virtualized Servers

    Directory of Open Access Journals (Sweden)

    Alexander PINNOW

    2009-01-01

    This paper deals with determining the capacity supply for virtualized servers. First, a server is modeled as a queue based on a Markov chain. Then, the effect of server virtualization on the capacity supply will be analyzed with the distribution function of the server load.
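
    As a rough illustration of queue-based capacity planning (not the paper's exact model), the Erlang C formula for an M/M/c queue can be used to ask how many virtual server instances keep the probability of waiting acceptably low; the arrival and service rates below are assumed values.

      # Hedged sketch: probability of waiting in an M/M/c queue (Erlang C).
      from math import factorial

      def erlang_c(arrival_rate, service_rate, servers):
          a = arrival_rate / service_rate            # offered load
          if a >= servers:
              return 1.0                             # unstable: customers always wait
          top = (a ** servers / factorial(servers)) * (servers / (servers - a))
          bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
          return top / bottom

      arrival_rate = 8.0   # requests per second arriving at the virtualized host
      service_rate = 1.0   # requests per second handled by one virtual server
      for c in range(9, 15):
          print(c, "virtual servers -> P(wait) =", round(erlang_c(arrival_rate, service_rate, c), 3))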

  20. [The effect of server devices on the quality of remote control via the Internet]

    OpenAIRE

    Gunawan; R, Imam Muslim

    2017-01-01

    The Internet greatly assists people in improving their quality of life. Almost all areas of human life can be accessed using the Internet, which provides all sorts of information that people need. Along with the development of Internet network infrastructure, remote control has begun to shift to the Internet. This study uses a notebook and a Raspberry Pi server to determine the control quality of each server device used. In this study we investigate the possibility o...

  1. Server for experimental data from LHD

    International Nuclear Information System (INIS)

    Emoto, M.; Ohdachi, S.; Watanabe, K.; Sudo, S.; Nagayama, Y.

    2006-01-01

    In order to unify various types of data, the Kaiseki Server was developed. This server provides physical experimental data of large helical device (LHD) experiments. Many types of data acquisition systems currently exist in operation, and they produce files of various formats. Therefore, it has been difficult to analyze different types of acquisition data at the same time because the data of each system should be read in a particular manner. To facilitate the usage of this data by researchers, the authors have developed a new server system, which provides a unified data format and a unique data retrieval interface. Although the Kaiseki Server satisfied the initial demand, new requests arose from researchers, one of which was the remote usage of the server. The current system cannot be used remotely because of security issues. Another request was group ownership, i.e., users belonging to the same group should have equal access to data. To satisfy these demands, the authors modified the server. However, since other requests may arise in the future, the new system must be flexible so that it can satisfy future demands. Therefore, the authors decided to develop a new server using a three-tier structure

  2. Windows Server 2012 R2 administrator cookbook

    CERN Document Server

    Krause, Jordan

    2015-01-01

    This book is intended for system administrators and IT professionals with experience in Windows Server 2008 or Windows Server 2012 environments who are looking to acquire the skills and knowledge necessary to manage and maintain the core infrastructure required for a Windows Server 2012 and Windows Server 2012 R2 environment.

  3. Manipulating E-Mail Server Feedback for Spam Prevention

    Directory of Open Access Journals (Sweden)

    O. A. Okunade

    2017-08-01

    The cyber criminals who infect machines with bots are not the same as the spammers who rent botnets to distribute their messages. The activities of these spammers account for the majority of spam email traffic on the internet. Once their botnets and campaigns are identified, it is not enough to keep on filtering the spam emails; it is necessary to deploy techniques that will carry the fight to their end. It is observed that spammers also take into account server feedback (for example, to detect and remove non-existent recipients from email address lists). We can take advantage of this observation by returning fake information, thereby poisoning the server feedback on which the spammers rely. The results of this paper show that by sending misleading information to a spammer, it is possible to prevent recipients from receiving subsequent spam emails from that same spammer.

  4. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular to search for similar objects in an own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-word model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database highlighting the visual information which is common with the query image. Additionally, new images can be added to the database making it a powerful and interactive tool for mobile content-based image retrieval.
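
    A minimal sketch of the server-side bag-of-words step is shown below; random vectors stand in for real local descriptors (e.g. SIFT or ORB) and the vocabulary size is an assumption.

      # Hedged sketch: build a visual vocabulary and quantize descriptors into a histogram.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      training_descriptors = rng.normal(size=(500, 32))    # pooled from many database images
      vocabulary = KMeans(n_clusters=50, n_init=10, random_state=0).fit(training_descriptors)

      def bow_histogram(descriptors):
          words = vocabulary.predict(descriptors)
          hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
          return hist / hist.sum()          # normalized word histogram for one image

      query_descriptors = rng.normal(size=(80, 32))         # descriptors of a query image
      print(bow_histogram(query_descriptors)[:10])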

  5. Mastering Windows Server 2008 Networking Foundations

    CERN Document Server

    Minasi, Mark; Mueller, John Paul

    2011-01-01

    Find in-depth coverage of general networking concepts and basic instruction on Windows Server 2008 installation and management including active directory, DNS, Windows storage, and TCP/IP and IPv4 networking basics in Mastering Windows Server 2008 Networking Foundations. One of three new books by best-selling author Mark Minasi, this guide explains what servers do, how basic networking works (IP basics and DNS/WINS basics), and the fundamentals of the under-the-hood technologies that support staff must understand. Learn how to install Windows Server 2008 and build a simple network, security co

  6. National Medical Terminology Server in Korea

    Science.gov (United States)

    Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee

    Interoperable EHR (Electronic Health Record) necessitates at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as Systematized Nomenclature of Medicine (SNOMED). The server is intended to satisfy the need for quality terminology systems in local primary to tertiary hospitals. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.

  7. BEAM web server: a tool for structural RNA motif discovery.

    Science.gov (United States)

    Pietrosanto, Marco; Adinolfi, Marta; Casula, Riccardo; Ausiello, Gabriele; Ferrè, Fabrizio; Helmer-Citterich, Manuela

    2018-03-15

    RNA structural motif finding is a relevant problem that becomes computationally hard when working on high-throughput data (e.g. eCLIP, PAR-CLIP), often represented by thousands of RNA molecules. Currently, the BEAM server is the only web tool capable of handling tens of thousands of RNAs as input with a motif discovery procedure that is only limited by the current secondary structure prediction accuracies. The recently developed method BEAM (BEAr Motifs finder) can analyze tens of thousands of RNA molecules and identify RNA secondary structure motifs associated with a measure of their statistical significance. BEAM is extremely fast thanks to the BEAR encoding that transforms each RNA secondary structure into a string of characters. BEAM also exploits the evolutionary knowledge contained in a substitution matrix of secondary structure elements, extracted from the RFAM database of families of homologous RNAs. The BEAM web server has been designed to streamline data pre-processing by automatically handling folding and encoding of RNA sequences, giving users a choice for the preferred folding program. The server provides an intuitive and informative results page with the list of secondary structure motifs identified, the logo of each motif, its significance, graphic representation and information about its position in the RNA molecules sharing it. The web server is freely available at http://beam.uniroma2.it/ and it is implemented in NodeJS and Python with all major browsers supported. marco.pietrosanto@uniroma2.it. Supplementary data are available at Bioinformatics online.

  8. Artificial intelligence approach to interwell log correlation

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Jong-Se [Korea Maritime University, Pusan(Korea); Kang, Joo Myung [Seoul National University, Seoul(Korea); Kim, Jung Whan [Korea National Oil Corp., Anyang(Korea)

    2000-04-30

    This paper describes a new approach to automated interwell log correlation using artificial intelligence and principal component analysis. The approach to correlating wireline logging data is based on a large set of subjective rules that are intended to represent human logical processes. The data processed are mainly qualitative information such as the characteristics of the shapes extracted along log traces. The apparent geologic zones are identified by pattern recognition applied to the specific characteristics of log traces collected as a set of objects by object-oriented programming. The correlation of zones between wells is made by a rule-based inference program. A reliable correlation can be established from the first principal component logs derived from both the important information around the well bore and the largest common part of the variances of all available well log data. Correlation with field log data shows that this approach can make interwell log correlation more reliable and accurate. (author). 6 refs., 7 figs.
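
    A hedged sketch of deriving a first-principal-component log from several wireline curves, as mentioned above, is given below; the synthetic gamma-ray, resistivity and density traces are stand-ins for real log data.

      # Hedged sketch: standardize several log curves and extract their first principal component.
      import numpy as np
      from sklearn.decomposition import PCA

      depth = np.arange(1000.0, 1050.0, 0.5)
      noise = np.random.default_rng(1).normal(0, 2, depth.size)
      gamma = 60 + 20 * np.sin(depth / 3.0) + noise            # API units (synthetic)
      resistivity = 10 + 5 * np.sin(depth / 3.0 + 0.2)          # ohm-m (synthetic)
      density = 2.4 + 0.1 * np.sin(depth / 3.0 - 0.1)           # g/cc (synthetic)

      curves = np.column_stack([gamma, resistivity, density])
      standardized = (curves - curves.mean(axis=0)) / curves.std(axis=0)
      pc1 = PCA(n_components=1).fit_transform(standardized).ravel()
      print(pc1[:5])   # the PC1 trace can then be compared between wells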

  9. [Design of a Zimbra mail server using virtualization technology. Case study: SMK Pancakarya, Tangerang City]

    Directory of Open Access Journals (Sweden)

    Heru Prasetiawan

    2017-05-01

    Information technology is developing rapidly, spurring the emergence of new technologies that are constantly evolving and that are more reliable, efficient, economical, and powerful than their predecessors. Electronic mail (email) is a form of electronic communication and correspondence through a computer system, transmitted across a computer network to another computer. A mail server is needed to support communication via email. The Zimbra mail server is implemented using virtualization technology with the Proxmox operating system, a Debian-based Linux distribution, and SLES (Suse Linux Enterprise Server) as the guest operating system. This research was conducted at an institution that already had computer networking facilities, so the work complements the institution's need for a mail server. The result is a mail server application, built on virtualization technology, that provides web-based mail client facilities, antivirus and antispam.

  10. Microsoft® Office Communications Server 2007 R2 Resource Kit

    CERN Document Server

    Maximo, Rui; Ramanathan, Rajesh; Kamdar, Nirav

    2009-01-01

    In-depth, comprehensive, and fully revised for R2-this RESOURCE KIT delivers the information you need to deploy, manage, and troubleshoot Microsoft Office Communications Server 2007 R2. Get technical insights, scenarios, and best practices from those who know the technology best-the engineers who designed and developed it-along with 90+ Windows PowerShell™ scripts, bonus references, and other essential resources on CD. Get expert advice on how to: Plan server roles, infrastructure, topology, and securityDesign and manage enterprise instant messaging (IM), presence, and conferencing solutio

  11. Robust client/server shared state interactions of collaborative process with system crash and network failures

    NARCIS (Netherlands)

    Wang, Lei; Wombacher, Andreas; Ferreira Pires, Luis; van Sinderen, Marten J.; Chi, Chihung

    With the possibility of system crashes and network failures, the design of robust client/server interactions for collaborative process execution is a challenge. If a business process changes state, it sends messages to relevant processes to inform about this change. However, server crashes and

  12. Secure data aggregation in heterogeneous and disparate networks using stand off server architecture

    Science.gov (United States)

    Vimalathithan, S.; Sudarsan, S. D.; Seker, R.; Lenin, R. B.; Ramaswamy, S.

    2009-04-01

    The emerging global reach of technology presents myriad challenges and intricacies as Information Technology teams aim to provide anywhere, anytime and anyone access, for service providers and customers alike. The world is fraught with stifling inequalities, both from an economic as well as socio-political perspective. The net result has been large capability gaps between various organizational locations that need to work together, which has raised new challenges for information security teams. Similar issues arise when mergers and acquisitions among and between organizations take place. While integrating remote business locations with mainstream operations, issues such as the lack of application-level support, limited computational capabilities, communication constraints, and legal requirements seriously impede integration that does not violate the organizations' security requirements. Commonly used techniques such as IPSec, tunneling, Secure Sockets Layer, etc. may not always be techno-economically feasible. This paper addresses such security issues by introducing an intermediate server, called a stand-off server, between the corporate central server and remote sites. We present techniques such as break-before-make connection, break connection after transfer, and multiple virtual machine instances with different operating systems using the concept of a stand-off server. Our experiments show that the proposed solution provides sufficient isolation of the central server/site from attacks arising out of weak communication and/or computing links and is simple to implement.

  13. NExT server

    CERN Document Server

    1989-01-01

    The first website at CERN - and in the world - was dedicated to the World Wide Web project itself and was hosted on Berners-Lee's NeXT computer. The website described the basic features of the web; how to access other people's documents and how to set up your own server. This NeXT machine - the original web server - is still at CERN. As part of the project to restore the first website, in 2013 CERN reinstated the world's first website to its original address.

  14. WEB-server for search of a periodicity in amino acid and nucleotide sequences

    Science.gov (United States)

    E Frenkel, F.; Skryabin, K. G.; Korotkov, E. V.

    2017-12-01

    A new web server (http://victoria.biengi.ac.ru/splinter/login.php) was designed and developed to search for periodicity in nucleotide and amino acid sequences. The web server operation is based upon a new mathematical method of searching for multiple alignments, which is founded on the optimization of position weight matrices, as well as on the implementation of two-dimensional dynamic programming. This approach allows the construction of multiple alignments of indistinctly similar amino acid and nucleotide sequences that have accumulated more than 1.5 substitutions per amino acid or nucleotide, without performing pairwise comparisons of the sequences. The article examines the principles of the web server operation and two examples of studying amino acid and nucleotide sequences, as well as information that could be obtained using the web server.

  15. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    CERN Document Server

    Valassi, A; Kalkhof, A; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN for accessing the data stored by the LHC experiments using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier web server and cache. Two new components have recently been added to CORAL to implement a model involving a middle tier "CORAL server" deployed close to the database and a tree of "CORAL server proxy" instances, with data caching and multiplexing functionalities, deployed close to the client. The new components are meant to provide advantages for read-only and read-write data access, in both offline and online use cases, in the areas of scalability and performance (multiplexing for several incoming connections, optional data caching) and security (authentication via proxy certificates). A first implementation of the two new c...

  16. TwiddleNet: Smartphones as Personal Servers

    OpenAIRE

    Gurminder, Singh; Center for the Study of Mobile Devices and Communications

    2012-01-01

    TwiddleNet uses smartphones as personal servers to enable instant content capture and dissemination for first responders. It supports the information sharing needs of first responders in the early stages of an emergency response operation. In TwiddleNet, content, once captured, is automatically tagged and disseminated using one of the several networking channels available in smartphones. TwiddleNet pays special attention to minimizing the equipment, network set-up time, and content...

  17. Personalized Pseudonyms for Servers in the Cloud

    Directory of Open Access Journals (Sweden)

    Xiao Qiuyu

    2017-10-01

    A considerable and growing fraction of servers, especially of web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve privacy of clients from network attackers residing between the clients and the cloud: we design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud’s tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced “popsicle”), a persistent pseudonym for a tenant server that can be used by a single client to access the server, whose real identity is protected by the cloud from both passive and active network attackers. When instantiated for TLS-based access to web servers, our design works with all major browsers and requires no additional client-side software and minimal changes to the client user experience. Moreover, changes to tenant servers can be hidden in supporting software (operating systems and web-programming frameworks) without imposing on web-content development. Perhaps most notably, our system boosts privacy with minimal impact on web-browsing performance, after some initial setup during a user’s first access to each web server.

  18. DelPhiPKa web server: predicting pKa of proteins, RNAs and DNAs.

    Science.gov (United States)

    Wang, Lin; Zhang, Min; Alexov, Emil

    2016-02-15

    A new pKa prediction web server is released, which implements the DelPhi Gaussian dielectric function to calculate electrostatic potentials generated by charges of biomolecules. Topology parameters are extended to include atomic information of nucleotides of RNA and DNA, which extends the capability of pKa calculations beyond proteins. The web server allows the end-user to protonate the biomolecule at a particular pH based on calculated pKa values and provides the downloadable file in PQR format. Several tests are performed to benchmark the accuracy and speed of the protocol. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhiPKa program. The computation is performed on the Palmetto supercomputer cluster and results/download links are given back to the end-user via the http protocol. The web server takes advantage of the MPI parallel implementation in DelPhiPKa and can run a single job on up to 24 CPUs. The DelPhiPKa web server is available at http://compbio.clemson.edu/pka_webserver. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Mastering Microsoft Windows Server 2008 R2

    CERN Document Server

    Minasi, Mark; Finn, Aidan

    2010-01-01

    The one book you absolutely need to get up and running with Windows Server 2008 R2. One of the world's leading Windows authorities and top-selling author Mark Minasi explores every nook and cranny of the latest version of Microsoft's flagship network operating system, Windows Server 2008 R2, giving you the most in-depth coverage in any book on the market.: Focuses on Windows Server 2008 R2, the newest version of Microsoft's Windows Server line of operating systems, and the ideal server for new Windows 7 clients; Author Mark Minasi is one of the world's leading Windows authorities and h

  20. Mastering Windows Server 2012 R2

    CERN Document Server

    Minasi, Mark; Booth, Christian; Butler, Robert; McCabe, John; Panek, Robert; Rice, Michael; Roth, Stefan

    2013-01-01

    Check out the new Hyper-V, find new and easier ways to remotely connect back into the office, or learn all about Storage Spaces-these are just a few of the features in Windows Server 2012 R2 that are explained in this updated edition from Windows authority Mark Minasi and a team of Windows Server experts led by Kevin Greene. This book gets you up to speed on all of the new features and functions of Windows Server, and includes real-world scenarios to put them in perspective. If you're a system administrator upgrading to, migrating to, or managing Windows Server 2012 R2, find what you need to

  1. Compulsory registration of mail servers in order to accept e-mail from the Internet

    CERN Multimedia

    IT Department

    2008-01-01

    This announcement is intended only for administrators of a mail server (sendmail, postfix, etc.). It concerns e-mails sent from the Internet to addresses of the following type: somemailbox@somehost.cern.ch. Mail server managers are requested to register their servers so that they can accept e-mail from outside CERN. In future the CERN mail infrastructure will relay messages from outside CERN only to officially registered mail servers. This rule applies only to messages sent from the Internet. There will be NO change with respect to messages sent from inside CERN. If you are responsible for a mail server that accepts e-mails from outside CERN, please read the following page: http://cern.ch/mail/Help/?kbid=191090, where you can find information about the new rule and check if your host is already registered in the system. If you wish to register a mail server please send a message to: mailto:mail.support@cern.ch. This rule will be gradually enforced from 20 February 2008 onwards. Thank you for your cooperation...

  2. National logging program for the National Uranium Resource Evaluation. Final report

    International Nuclear Information System (INIS)

    The Mineral Engineering Division (MED) of High Life Helicopters, Inc., operated from May 1979 through August 1981 as a subcontractor to the Department of Energy (DOE) to acquire downhole geophysical log information in support of the National Uranium Resource Evaluation program (NURE). MED acquired downhole geophysical log information in 26 1° x 2° NTMS quadrangles in Colorado, Montana, Nebraska, North Dakota, South Dakota, and Wyoming. MED obtained the log information by gaining permission to log oil and gas wells, water wells, and coal exploration holes. Actual geophysical logging was subcontracted to Century Geophysical Corporation. After logging of each well was completed, MED submitted the log information and other pertinent data to Bendix Field Engineering Corporation (BFEC) for evaluation. MED collected over 700,000 feet of geophysical logs. Additionally, MED conducted a search of log libraries for existing log data for twelve of the quadrangles included in the program. It should be noted that ERTEC, Inc. conducted geophysical logging and a log library search for five quadrangles in Wyoming. These areas were later assigned to MED. The locations of all wells logged by MED and ERTEC and the locations of other log data are shown on the enclosed maps. Detailed information that pertains to each well is provided following each map.

  3. An adversarial queueing model for online server routing

    NARCIS (Netherlands)

    Bonifaci, V.

    2007-01-01

    In an online server routing problem, a vehicle or server moves in a network in order to process incoming requests at the nodes. Online server routing problems have been thoroughly studied using competitive analysis. We propose a new model for online server routing, based on adversarial queueing

  4. Distributed control system for demand response by servers

    Science.gov (United States)

    Hall, Joseph Edward

    Within the broad topical designation of smart grid, research in demand response, or demand-side management, focuses on investigating possibilities for electrically powered devices to adapt their power consumption patterns to better match generation and more efficiently integrate intermittent renewable energy sources, especially wind. Devices such as battery chargers, heating and cooling systems, and computers can be controlled to change the time, duration, and magnitude of their power consumption while still meeting workload constraints such as deadlines and rate of throughput. This thesis presents a system by which a computer server, or multiple servers in a data center, can estimate the power imbalance on the electrical grid and use that information to dynamically change the power consumption as a service to the grid. Implementation on a testbed demonstrates the system with a hypothetical but realistic usage case scenario of an online video streaming service in which there are workloads with deadlines (high-priority) and workloads without deadlines (low-priority). The testbed is implemented with real servers, estimates the power imbalance from the grid frequency with real-time measurements of the live outlet, and uses a distributed, real-time algorithm to dynamically adjust the power consumption of the servers based on the frequency estimate and the throughput of video transcoder workloads. Analysis of the system explains and justifies multiple design choices, compares the significance of the system in relation to similar publications in the literature, and explores the potential impact of the system.
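
    A hedged sketch of the general idea (not the thesis's distributed algorithm) is shown below: map the measured grid-frequency deviation to a server power cap so that low-priority work is throttled first. The nominal frequency, droop constant and power limits are assumed values.

      # Hedged sketch: droop-style mapping from grid frequency to a server power cap.
      NOMINAL_HZ = 60.0      # 50.0 in Europe
      MAX_POWER_W = 400.0    # assumed server power budget
      MIN_POWER_W = 150.0    # floor reserved for high-priority (deadline) workloads

      def power_cap(measured_hz, droop=0.05):
          # Under-frequency (excess demand) lowers the cap; over-frequency raises it.
          deviation = (measured_hz - NOMINAL_HZ) / NOMINAL_HZ
          cap = MAX_POWER_W * (1.0 + deviation / droop)
          return max(MIN_POWER_W, min(MAX_POWER_W, cap))

      for hz in (59.90, 59.98, 60.00, 60.02):
          print(hz, "Hz ->", round(power_cap(hz), 1), "W")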

  5. RaptorX-Property: a web server for protein structure property prediction.

    Science.gov (United States)

    Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo

    2016-07-08

    RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Method for a dummy CD mirror server based on NAS

    Science.gov (United States)

    Tang, Muna; Pei, Jing

    2002-09-01

    With the development of computer networks, information sharing is becoming a necessity in human life. The rapid development of CD-ROM and CD-ROM drive techniques makes it possible to publish large databases online. After comparing many designs of dummy CD mirror databases, which embody a main product in CD-ROM databases now and in the near future, we proposed and realized a new PC-based scheme. Our system has the following merits: support for all kinds of CD formats; support for many network protocols; independence of the mirror network server from the main server; and low price and very large capacity without the need for any special hardware. Preliminary experiments have verified the validity of the proposed scheme. Encouraged by its promising application future, we are now preparing to put it on the market. This paper discusses the design and implementation of the CD-ROM server in detail.

  7. A tandem queue with delayed server release

    OpenAIRE

    Nawijn, W.M.

    1997-01-01

    We consider a tandem queue with two stations. The first station is an s-server queue with Poisson arrivals and exponential service times. After terminating his service in the first station, a customer enters the second station to require service at an exponential single server, while in the meantime he is blocking his server in station 1 until he completes service in station 2, whereupon the server in station 1 is released. An analysis of the generating function of the simultaneous probability di...

  8. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…
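
    As a rough, hypothetical illustration of the "multiple servers, single client" model mentioned above, the sketch below fans independent work units out to several HTTP endpoints and collects the results; the worker URLs and the task parameter are invented for the example and are not taken from the article.

        # Hypothetical sketch of the "multiple servers, single client" model:
        # a client farms independent work units out to several HTTP workers.
        # The worker URLs and the ?task= parameter are invented for illustration.
        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        WORKERS = [
            "http://worker1.example.org/compute",
            "http://worker2.example.org/compute",
            "http://worker3.example.org/compute",
        ]

        def run_task(task_id: int) -> bytes:
            # Spread the tasks over the available worker servers.
            url = f"{WORKERS[task_id % len(WORKERS)]}?task={task_id}"
            with urlopen(url, timeout=30) as response:
                return response.read()

        def run_all(n_tasks: int) -> list:
            with ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
                return list(pool.map(run_task, range(n_tasks)))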

  9. JAFA: a protein function annotation meta-server

    DEFF Research Database (Denmark)

    Friedberg, Iddo; Harder, Tim; Godzik, Adam

    2006-01-01

    Annotations, or JAFA server. JAFA queries several function prediction servers with a protein sequence and assembles the returned predictions in a legible, non-redundant format. In this manner, JAFA combines the predictions of several servers to provide a comprehensive view of what are the predicted functions...

  10. A polling model with an autonomous server

    NARCIS (Netherlands)

    de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.

    Polling models are used as an analytical performance tool in several application areas. In these models, the focus often is on controlling the operation of the server as to optimize some performance measure. For several applications, controlling the server is not an issue as the server moves

  11. Passive Detection of Misbehaving Name Servers

    Science.gov (United States)

    2013-10-01

    ...name servers that changed IP address five or more times in a month. Solid red line indicates those servers possibly linked to pharmaceutical scams. ...malicious and stated that fast-flux hosting “is considered one of the most serious threats to online activities today” [ICANN 2008, p. 2]. ...that time, apparently independent of filters on name-server flux, a large number of pharmaceutical scams were taken down. These scams apparently...

  12. The new protein topology graph library web server.

    Science.gov (United States)

    Schäfer, Tim; Scheck, Andreas; Bruneß, Daniel; May, Patrick; Koch, Ina

    2016-02-01

    We present a new, extended version of the Protein Topology Graph Library web server. The Protein Topology Graph Library describes the protein topology on the super-secondary structure level. It allows users to compute and visualize protein-ligand graphs and to search for protein structural motifs. The new server features additional information on ligand binding to secondary structure elements, increased usability and an application programming interface (API) to retrieve data, allowing for an automated analysis of protein topology. The Protein Topology Graph Library server is freely available on the web at http://ptgl.uni-frankfurt.de. The website is implemented in PHP, JavaScript, PostgreSQL and Apache. It is supported by all major browsers. The VPLG software that was used to compute the protein-ligand graphs and all other data in the database is available under the GNU public license 2.0 from http://vplg.sourceforge.net. tim.schaefer@bioinformatik.uni-frankfurt.de; ina.koch@bioinformatik.uni-frankfurt.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Face logging in Copenhagen Limestone, Denmark

    DEFF Research Database (Denmark)

    Jakobsen, Lisa; Foged, Niels Nielsen; Erichsen, Lars

    2015-01-01

    The requirement for excavation support can be assessed from face logging. Face logs can also improve our knowledge of lithological and structural conditions within bedrock and supplement information from boreholes and geophysical logs. During the construction of 8 km of metro tunnel and 4 km of heating tunnel in Copenhagen, more than 2.5 km of face logs were made in 467 locations at underground stations, shafts, caverns and along bored tunnels. Over 160 geotechnical boreholes, many with geophysical logging, were executed prior to construction works. The bedrock consists of Paleogene "Copenhagen limestone" ... The induration degrees recorded in face logs and boreholes are compared and correlated. Distinct geophysical log markers are used to divide the limestone into three units. These marker horizons are correlated between face logs and geotechnical boreholes. A 3D model of the strength variations recorded within...

  14. Mastering Microsoft Windows Small Business Server 2008

    CERN Document Server

    Johnson, Steven

    2010-01-01

    A complete, winning approach to the number one small business solution. Do you have 75 or fewer users or devices on your small-business network? Find out how to integrate everything you need for your mini-enterprise with Microsoft's new Windows Server 2008 Small Business Server, a custom collection of server and management technologies designed to help small operations run smoothly without a giant IT department. This comprehensive guide shows you how to master all SBS components as well as handle integration with other Microsoft technologies.: Focuses on Windows Server 2008 Small Business Serv

  15. Server-side Statistics Scripting in PHP

    Directory of Open Access Journals (Sweden)

    Jan de Leeuw

    1997-06-01

    Full Text Available On the UCLA Statistics WWW server there are a large number of demos and calculators that can be used in statistics teaching and research. Some of these demos require substantial amounts of computation, others mainly use graphics. These calculators and demos are implemented in various different ways, reflecting developments in WWW based computing. As usual, one of the main choices is between doing the work on the client-side (i.e. in the browser) or on the server-side (i.e. on our WWW server). Obviously, client-side computation puts fewer demands on the server. On the other hand, it requires that the client downloads Java applets, or installs plugins and/or helpers. If JavaScript is used, client-side computations will generally be slow. We also have to assume that the client is installed properly, and has the required capabilities. Requiring too much on the client-side has caused browsing machines such as Netscape Communicator to grow beyond all reasonable bounds, both in size and RAM requirements. Moreover, requiring Java and JavaScript rules out such excellent browsers as Lynx or Emacs W3. For server-side computing, we can configure the server and its resources ourselves, and we need not worry about browser capabilities and configuration. Nothing needs to be downloaded, except the usual HTML pages and graphics. In the same way as on the client side, there is a scripting solution, where code is interpreted, or an object-code solution using compiled code. For the server-side scripting, we use embedded languages, such as PHP/FI. The scripts in the HTML pages are interpreted by a CGI program, and the output of the CGI program is sent to the clients. Of course the CGI program is compiled, but the statistics procedures will usually be interpreted, because PHP/FI does not have the appropriate functions in its scripting language. This will tend to be slow, because embedded languages do not deal efficiently with loops and similar constructs. Thus a first

  16. Switching of servers in small and medium-sized companies

    Energy Technology Data Exchange (ETDEWEB)

    Huser, A.

    2001-07-01

    This report for the Swiss Federal Office of Energy (SFOE) looks at the feasibility of switching off electronic data processing servers during periods of non-use, such as during the night or over the weekend. The results of a representative survey made in the German-speaking part of Switzerland are presented; these show that acceptance is high if fully developed and reliable systems are available. The feasibility of switching off servers, which has been demonstrated in four pilot installations, is discussed. In these projects, the goal was to cut operating times of central network components such as servers, printers etc. Measures taken to inform users of planned switch-off times and the possibilities they are given to change or override these are discussed. Other advantages to be gained from controlled, automatic shut-down of these components such as better reliability and security, time-savings for administrators and energy savings of more than 50% are discussed. The report recommends that further pilot projects be carried out with the ultimate goal of integrating the functions in commercial products.

  17. Triple-server blind quantum computation using entanglement swapping

    Science.gov (United States)

    Li, Qin; Chan, Wai Hong; Wu, Chunhui; Wen, Zhonghua

    2014-04-01

    Blind quantum computation allows a client who does not have enough quantum resources or technologies to achieve quantum computation on a remote quantum server such that the client's input, output, and algorithm remain unknown to the server. Up to now, single- and double-server blind quantum computation have been considered. In this work, we propose a triple-server blind computation protocol where the client can delegate quantum computation to three quantum servers by the use of entanglement swapping. Furthermore, the three quantum servers can communicate with each other, and the client is almost classical, since it does not require any quantum computational power, quantum memory, or the ability to prepare quantum states, and only needs to be capable of accessing quantum channels.

  18. The pepATTRACT web server for blind, large-scale peptide-protein docking.

    Science.gov (United States)

    de Vries, Sjoerd J; Rey, Julien; Schindler, Christina E M; Zacharias, Martin; Tuffery, Pierre

    2017-07-03

    Peptide-protein interactions are ubiquitous in the cell and form an important part of the interactome. Computational docking methods can complement experimental characterization of these complexes, but current protocols are not applicable on the proteome scale. pepATTRACT is a novel docking protocol that is fully blind, i.e. it does not require any information about the binding site. In various stages of its development, pepATTRACT has participated in CAPRI, making successful predictions for five out of seven protein-peptide targets. Its performance is similar to or better than that of state-of-the-art local docking protocols that do require binding site information. Here we present a novel web server that carries out the rigid-body stage of pepATTRACT. On the peptiDB benchmark, the web server generates a correct model in the top 50 in 34% of the cases. Compared to the full pepATTRACT protocol, this leads to some loss of performance, but the computation time is reduced from ∼18 h to ∼10 min. Combined with the fact that it is fully blind, this makes the web server well-suited for large-scale in silico protein-peptide docking experiments. The rigid-body pepATTRACT server is freely available at http://bioserv.rpbs.univ-paris-diderot.fr/services/pepATTRACT. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Mariners Weather Log

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Mariners Weather Log (MWL) is a publication containing articles, news and information about marine weather events and phenomena, worldwide environmental impact...

  20. A semantic perspective on query log analysis

    NARCIS (Netherlands)

    Hofmann, K.; de Rijke, M.; Huurnink, B.; Meij, E.

    2009-01-01

    We present our views on the CLEF log file analysis task. We argue for a task definition that focuses on the semantic enrichment of query logs. In addition, we discuss how additional information about the context in which queries are being made could further our understanding of users’ information

  1. The Medicago truncatula gene expression atlas web server

    Directory of Open Access Journals (Sweden)

    Tang Yuhong

    2009-12-01

    Full Text Available Abstract Background Legumes (Leguminosae or Fabaceae) play a major role in agriculture. Transcriptomics studies in the model legume species, Medicago truncatula, are instrumental in helping to formulate hypotheses about the role of legume genes. With the rapid growth of publicly available Affymetrix GeneChip Medicago Genome Array data from a great range of tissues, cell types, growth conditions, and stress treatments, the legume research community desires an effective bioinformatics system to aid efforts to interpret the Medicago genome through functional genomics. We developed the Medicago truncatula Gene Expression Atlas (MtGEA) web server for this purpose. Description The Medicago truncatula Gene Expression Atlas (MtGEA) web server is a centralized platform for analyzing the Medicago transcriptome. Currently, the web server hosts gene expression data from 156 Affymetrix GeneChip® Medicago genome arrays in 64 different experiments, covering a broad range of developmental and environmental conditions. The server enables flexible, multifaceted analyses of transcript data and provides a range of additional information about genes, including different types of annotation and links to the genome sequence, which help users formulate hypotheses about gene function. Transcript data can be accessed using Affymetrix probe identification number, DNA sequence, gene name, functional description in natural language, GO and KEGG annotation terms, and InterPro domain number. Transcripts can also be discovered through co-expression or differential expression analysis. Flexible tools to select a subset of experiments and to visualize and compare expression profiles of multiple genes have been implemented. Data can be downloaded, in part or full, in a tabular form compatible with common analytical and visualization software. The web server will be updated on a regular basis to incorporate new gene expression data and genome annotation, and is accessible

  2. Effect of video server topology on contingency capacity requirements

    Science.gov (United States)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
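
    The cost of partitioning discussed above can be made concrete with the classic Erlang-B blocking formula from telephony: for the same total capacity and offered load, one monolithic server blocks far fewer stream requests than several small, statically partitioned servers. The snippet below is a generic illustration with made-up capacities and load, not the paper's own model or numbers.

        # Erlang-B blocking probability, used to compare a monolithic video server
        # with the same capacity split into independent partitions.
        # Capacities and offered load are made-up illustration values.
        def erlang_b(offered_load: float, servers: int) -> float:
            """Blocking probability for `servers` stream slots under Poisson demand."""
            b = 1.0
            for n in range(1, servers + 1):
                b = (offered_load * b) / (n + offered_load * b)
            return b

        total_slots = 120    # concurrent streams the whole cluster can serve
        offered = 100.0      # offered load in Erlangs (mean concurrent demand)

        monolithic = erlang_b(offered, total_slots)
        partitioned = erlang_b(offered / 4, total_slots // 4)  # 4 independent pieces

        print(f"monolithic : {monolithic:.4f} blocking probability")
        print(f"partitioned: {partitioned:.4f} blocking probability")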

  3. RANCANG BANGUN PERANGKAT LUNAK MANAJEMEN DATABASE SQL SERVER BERBASIS WEB

    Directory of Open Access Journals (Sweden)

    Muchammad Husni

    2005-01-01

    Full Text Available Microsoft SQL Server is a desktop database server application with a client/server architecture: it has a client component, which displays and manipulates data, and a server component, which stores, retrieves and secures the database. Management operations on all the database servers in a network are performed by the database administrator using SQL Server's main administrative tool, Enterprise Manager. As a consequence, the database administrator can only carry out these operations on a computer on which Microsoft SQL Server has been installed. In this study, a web-based application was designed using ASP.Net to manage the database server. The application uses ADO.NET, which relies on Transact-SQL and stored procedures on the server, to perform database management operations on a SQL database server and to display the results on the web. The database administrator can run this web-based application from any computer on the network and connect to the SQL database server using a web browser, making it easier to carry out administrative tasks without having to use the server computer itself. Keywords: Transact-SQL, ASP.Net, ADO.NET, SQL Server

  4. Mastering Citrix XenServer

    CERN Document Server

    Reed, Martez

    2014-01-01

    If you are an administrator who is looking to gain a greater understanding of how to design and implement a virtualization solution based on Citrix® XenServer®, then this book is for you. The book will serve as an excellent resource for those who are already familiar with other virtualization platforms, such as Microsoft Hyper-V or VMware vSphere.The book assumes that you have a good working knowledge of servers, networking, and storage technologies.

  5. Securing SQL Server Protecting Your Database from Attackers

    CERN Document Server

    Cherry, Denny

    2012-01-01

    Written by Denny Cherry, a Microsoft MVP for the SQL Server product, a Microsoft Certified Master for SQL Server 2008, and one of the biggest names in SQL Server today, Securing SQL Server, Second Edition explores the potential attack vectors someone can use to break into your SQL Server database as well as how to protect your database from these attacks. In this book, you will learn how to properly secure your database from both internal and external threats using best practices and specific tricks the author uses in his role as an independent consultant while working on some of the largest

  6. Securing SQL server protecting your database from attackers

    CERN Document Server

    Cherry, Denny

    2015-01-01

    SQL server is the most widely-used database platform in the world, and a large percentage of these databases are not properly secured, exposing sensitive customer and business data to attack. In Securing SQL Server, Third Edition, you will learn about the potential attack vectors that can be used to break into SQL server databases as well as how to protect databases from these attacks. In this book, Denny Cherry - a Microsoft SQL MVP and one of the biggest names in SQL server - will teach you how to properly secure an SQL server database from internal and external threats using best practic

  7. Server farms with setup costs

    NARCIS (Netherlands)

    Gandhi, A.; Harchol-Balter, M.; Adan, I.J.B.F.

    2010-01-01

    In this paper we consider server farms with a setup cost. This model is common in manufacturing systems and data centers, where there is a cost to turn servers on. Setup costs always take the form of a time delay, and sometimes there is additionally a power penalty, as in the case of data centers.

  8. TBI server: a web server for predicting ion effects in RNA folding.

    Science.gov (United States)

    Zhu, Yuhong; He, Zhaojian; Chen, Shi-Jie

    2015-01-01

    Metal ions play a critical role in the stabilization of RNA structures. Therefore, accurate prediction of the ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation. These ions can possibly become strongly correlated in the close vicinity of RNA surface. Most of the currently available software packages, which have widespread success in predicting ion effects in biomolecular systems, however, do not explicitly account for the ion correlation effect. Therefore, it is important to develop a software package/web server for the prediction of ion electrostatics in RNA folding by including ion correlation effects. The TBI web server http://rna.physics.missouri.edu/tbi_index.html provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and the most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for the ion correlation and fluctuation effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties for given RNA structures. The results can provide important data for in-depth analysis for ion effects in RNA folding including the ion-dependence of folding stability, ion uptake in the folding process, and the interplay between the different energetic components.

  9. TBI server: a web server for predicting ion effects in RNA folding.

    Directory of Open Access Journals (Sweden)

    Yuhong Zhu

    Full Text Available Metal ions play a critical role in the stabilization of RNA structures. Therefore, accurate prediction of the ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation. These ions can possibly become strongly correlated in the close vicinity of RNA surface. Most of the currently available software packages, which have widespread success in predicting ion effects in biomolecular systems, however, do not explicitly account for the ion correlation effect. Therefore, it is important to develop a software package/web server for the prediction of ion electrostatics in RNA folding by including ion correlation effects. The TBI web server http://rna.physics.missouri.edu/tbi_index.html provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and the most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for the ion correlation and fluctuation effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties for given RNA structures. The results can provide important data for in-depth analysis for ion effects in RNA folding including the ion-dependence of folding stability, ion uptake in the folding process, and the interplay between the different energetic components.

  10. California-Nevada uranium logging. Final report

    International Nuclear Information System (INIS)

    1981-04-01

    The purpose of this project was to obtain geophysical logs of industry drill holes to assess the uranium resource potential of geologic formations of interest. The work was part of the US Department of Energy's National Uranium Resource Evaluation (NURE) Program. The principal objective of the logging program was to determine radioelement grade of formations through natural gamma ray detectors. Supplementary information was obtained from resistivity (R), self-potential (SP), point resistance (RE), and neutron density (NN) logs for formation interpretation. Additional data for log interpretation was obtained from caliper logs, casing schedules, and downhole temperature. This data was obtained from well operators when available, with new logs obtained where not formerly available. This report contains a summary of the project and data obtained to date

  11. UPGRADE OF THE CENTRAL WEB SERVERS

    CERN Multimedia

    WEB Services

    2000-01-01

    During the weekend of the 25-26 March, the infrastructure of the CERN central web servers will undergo a major upgrade. As a result, the web services hosted by the central servers (that is, the services the address of which starts with www.cern.ch) will be unavailable Friday 24th, from 17:30 to 18:30, and may suffer from short interruptions until 20:00. This includes access to the CERN top-level page as well as the services referenced by this page (such as access to the scientific program and events information, or training, recruitment, housing services). After the upgrade, the change will be transparent to the users. Expert readers may however notice that when they connect to a web page starting with www.cern.ch this address is slightly changed when the page is actually displayed on their screen (e.g. www.cern.ch/Press will be changed to Press.web.cern.ch/Press). They should not worry: this behaviour, necessary for technical reasons, is normal. web.services@cern.ch, Tel 74989

  12. SNG-logs at the Bagsvaerd Lake

    International Nuclear Information System (INIS)

    Korsbech, U.

    1992-11-01

    Spectral Natural Gamma-logs (SNG) were taken in old boreholes around Bagsvaerd Lake (Zealand). The purpose of this investigation was to clarify the geologic/lithologic conditions in this region and the potential risks of waste penetration into ground water. Relationship curves for thorium, uranium and potassium concentrations are given. Some special logs which can be useful for evaluating concentration variations or transition forms among various lithological layers are collected. Appendices contain technical information on the boreholes and discussion of differences between results of SNG-logging and the conventional gamma-logging. (EG)

  13. CalFitter: a web server for analysis of protein thermal denaturation data.

    Science.gov (United States)

    Mazurenko, Stanislav; Stourac, Jan; Kunka, Antonin; Nedeljkovic, Sava; Bednar, David; Prokop, Zbynek; Damborsky, Jiri

    2018-05-14

    Despite significant advances in the understanding of protein structure-function relationships, revealing protein folding pathways still poses a challenge due to a limited number of relevant experimental tools. Widely-used experimental techniques, such as calorimetry or spectroscopy, critically depend on a proper data analysis. Currently, there are only separate data analysis tools available for each type of experiment with a limited model selection. To address this problem, we have developed the CalFitter web server to be a unified platform for comprehensive data fitting and analysis of protein thermal denaturation data. The server allows simultaneous global data fitting using any combination of input data types and offers 12 protein unfolding pathway models for selection, including irreversible transitions often missing from other tools. The data fitting produces optimal parameter values, their confidence intervals, and statistical information to define unfolding pathways. The server provides an interactive and easy-to-use interface that allows users to directly analyse input datasets and simulate modelled output based on the model parameters. CalFitter web server is available free at https://loschmidt.chemi.muni.cz/calfitter/.

  14. Estimation of the non records logs from existing logs using artificial neural networks

    Directory of Open Access Journals (Sweden)

    Mehdi Mohammad Salehi

    2017-12-01

    Full Text Available Finding information about hydrocarbon reservoirs from well logs is one of the main objectives of reservoir engineers. However, missing log records (due to many reasons such as broken instruments or unsuitable boreholes) are a major challenge to achieving it. Prediction of the density and resistivity logs (Rt, DT and LLS) from the conventional wire-line logs in one of the Iranian southwest oil fields is the main purpose of this study. A multilayer neural network was applied to develop an intelligent predictive model for prediction of the logs. A total of 3000 data sets from 3 wells (A, B and C) of the studied field were used. Among them, the data from wells A, B and C were used for constructing and testing the model, respectively. To evaluate the performance of the model, the mean square error (MSE) and correlation coefficient (R2) on the test data were calculated. A comparison between the MSE of the proposed model and recent intelligent models shows that the proposed model is more accurate than the others. Acceptable accuracy and the use of conventional well-logging data are the main advantages of the proposed intelligent model.
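
    A minimal version of this idea, training a multilayer network on depth points where all logs are present and predicting the missing curve elsewhere, could look like the sketch below. It uses scikit-learn with synthetic data and placeholder inputs (three recorded logs predicting one target log); it is not the network or the data of the studied field.

        # Minimal sketch: predict a missing log from other conventional logs with an
        # MLP. Synthetic data and placeholder inputs; not the paper's actual model.
        import numpy as np
        from sklearn.metrics import mean_squared_error, r2_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n = 3000
        X = rng.normal(size=(n, 3))          # three recorded logs (e.g. GR, NPHI, RHOB)
        y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
        )
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print("MSE:", mean_squared_error(y_te, pred))
        print("R2 :", r2_score(y_te, pred))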

  15. Mud Logging; Control geologico en perforaciones petroliferas (Mud Logging)

    Energy Technology Data Exchange (ETDEWEB)

    Pumarega Lafuente, J.C.

    1994-12-31

    Mud logging is an important activity in the oil field and a key job in drilling operations. Our duties are the acquisition, collection and interpretation of geological and engineering data at the wellsite, as well as informing the client immediately of any significant changes in the well. (Author)

  16. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
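
    A very reduced form of the syntactic-clustering step, collapsing variable fields such as numbers and hexadecimal identifiers so that messages sharing the same skeleton fall into one group, can be sketched as follows. The regular expressions and sample messages are illustrative only and are not those used in the paper.

        # Reduced illustration of grouping log messages by syntactic template:
        # variable fields (numbers, hex IDs) are masked so that lines sharing a
        # skeleton cluster together. Patterns and sample lines are illustrative.
        import re
        from collections import defaultdict

        MASKS = [
            (re.compile(r"0x[0-9a-fA-F]+"), "<HEX>"),
            (re.compile(r"\b\d+\b"), "<NUM>"),
        ]

        def template(message: str) -> str:
            for pattern, token in MASKS:
                message = pattern.sub(token, message)
            return message

        def group_by_template(lines):
            groups = defaultdict(list)
            for line in lines:
                groups[template(line)].append(line)
            return groups

        sample = [
            "node 17 failed memory check at 0x3fa2",
            "node 42 failed memory check at 0x09b1",
            "link down on port 3",
        ]
        for tpl, members in group_by_template(sample).items():
            print(len(members), tpl)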

  17. Experience of public procurement of Open Compute servers

    Science.gov (United States)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

    The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal to develop servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate if the OCP market is sufficiently mature and broad enough to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  18. Analysis of free SSL/TLS Certificates and their implementation as Security Mechanism in Application Servers.

    Directory of Open Access Journals (Sweden)

    Mario E. Cueva Hurtado

    2017-02-01

    Full Text Available Security at the application layer (SSL) provides confidentiality, integrity, and authenticity of the data exchanged between two communicating applications. This article is the result of implementing free SSL/TLS certificates on application servers, determining the relevant characteristics that an SSL/TLS certificate must have and the Certificate Authority that generates it. A vulnerability analysis of the application servers is carried out, and an encrypted communications channel is established to protect against attacks such as man-in-the-middle and phishing, and to maintain the integrity of the information transmitted between client and server.
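
    As a small companion to this kind of review, the standard-library sketch below retrieves and prints the certificate a server presents over TLS, which is the sort of check an implementation analysis relies on. The host name is a placeholder, and the snippet is not part of the article itself.

        # Retrieve and print the TLS certificate a server presents, using only the
        # Python standard library. The host name below is a placeholder.
        import socket
        import ssl

        def fetch_certificate(host: str, port: int = 443) -> dict:
            context = ssl.create_default_context()  # verifies chain and host name
            with socket.create_connection((host, port), timeout=10) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.getpeercert()

        cert = fetch_certificate("www.example.org")
        print("issuer :", cert.get("issuer"))
        print("expires:", cert.get("notAfter"))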

  19. MEMBANGUN SERVER BERBASIS LINUX PADA JARINGAN LAN DI LABOR SISTEM INFORMASI JURUSAN TEKNOLOGI INFORMASI POLITEKNIK NEGERI PADANG

    Directory of Open Access Journals (Sweden)

    Fifi Rasyidah

    2014-03-01

    Full Text Available The Information Systems Laboratory of the Information Technology Department at Politeknik Negeri Padang has 30 computers as educational facilities to support the learning process. All of the computers are used at the same time during a class session, which makes it difficult to monitor each student's activities. To provide a solution for the lecturers, the authors set up a server using the Linux operating system and clients running the Windows operating system, for which a Samba file server is needed. Using Samba, the lecturer is able to share data and use the server as a data storage medium. In addition, VNC (Virtual Network Computing) is used to simplify the process of monitoring and supervising the clients. Based on the results of the experiments carried out, it can be concluded that the Samba file server can be used once the appropriate configuration has been applied to certain files, and that VNC can control all of the clients. The authors suggest using the latest version of Samba, which has more features than previous versions, and applying the VNC configuration on Ubuntu Linux, since the service is available there. Keywords: Samba File Server, VNC, Ubuntu installation

  20. DIANA-microT web server: elucidating microRNA functions through target prediction.

    Science.gov (United States)

    Maragkakis, M; Reczko, M; Simossis, V A; Alexiou, P; Papadopoulos, G L; Dalamagas, T; Giannopoulos, G; Goumas, G; Koukis, E; Kourtis, K; Vergoulis, T; Koziris, N; Sellis, T; Tsanakas, P; Hatzigeorgiou, A G

    2009-07-01

    Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions with a user-friendly interface, providing extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence and protein databases, and users are facilitated by being able to search for targeted genes using different nomenclatures or functional features, such as the gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that helps in the evaluation of the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed. DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets (66%). The DIANA-microT web server is freely available at www.microrna.gr/microT.

  1. GPCR & company: databases and servers for GPCRs and interacting partners.

    Science.gov (United States)

    Kowalsman, Noga; Niv, Masha Y

    2014-01-01

    G-protein-coupled receptors (GPCRs) are a large superfamily of membrane receptors that are involved in a wide range of signaling pathways. To fulfill their tasks, GPCRs interact with a variety of partners, including small molecules, lipids and proteins. They are accompanied by different proteins during all phases of their life cycle. Therefore, GPCR interactions with their partners are of great interest in basic cell-signaling research and in drug discovery. Due to the rapid development of computers and internet communication, knowledge and data can be easily shared within the worldwide research community via freely available databases and servers. These provide an abundance of biological, chemical and pharmacological information. This chapter describes the available web resources for investigating GPCR interactions. We review about 40 freely available databases and servers, and provide a few sentences about the essence and the data they supply. For simplification, the databases and servers were grouped under the following topics: general GPCR-ligand interactions; particular families of GPCRs and their ligands; GPCR oligomerization; GPCR interactions with intracellular partners; and structural information on GPCRs. In conclusion, a multitude of useful tools are currently available. Summary tables are provided to ease navigation between the numerous and partially overlapping resources. Suggestions for future enhancements of the online tools include the addition of links from general to specialized databases and enabling usage of user-supplied template for GPCR structural modeling.

  2. SEGEL: A Web Server for Visualization of Smoking Effects on Human Lung Gene Expression.

    Science.gov (United States)

    Xu, Yan; Hu, Brian; Alnajm, Sammy S; Lu, Yin; Huang, Yangxin; Allen-Gipson, Diane; Cheng, Feng

    2015-01-01

    Cigarette smoking is a major cause of death worldwide resulting in over six million deaths per year. Cigarette smoke contains complex mixtures of chemicals that are harmful to nearly all organs of the human body, especially the lungs. Cigarette smoking is considered the major risk factor for many lung diseases, particularly chronic obstructive pulmonary diseases (COPD) and lung cancer. However, the underlying molecular mechanisms of smoking-induced lung injury associated with these lung diseases still remain largely unknown. Expression microarray techniques have been widely applied to detect the effects of smoking on gene expression in different human cells in the lungs. These projects have provided a lot of useful information for researchers to understand the potential molecular mechanism(s) of smoke-induced pathogenesis. However, a user-friendly web server that would allow scientists to quickly query these data sets and compare the smoking effects on gene expression across different cells had not yet been established. For that reason, we have integrated eight public expression microarray data sets from trachea epithelial cells, large airway epithelial cells, small airway epithelial cells, and alveolar macrophages into an online web server called SEGEL (Smoking Effects on Gene Expression of Lung). Users can query gene expression patterns across these cells from smokers and nonsmokers by gene symbols, and find the effects of smoking on the gene expression of lungs from this web server. Sex differences in response to smoking are also shown. The relationships between gene expression and cigarette smoking consumption were calculated and are shown on the server. The current version of SEGEL web server contains 42,400 annotated gene probe sets represented on the Affymetrix Human Genome U133 Plus 2.0 platform. SEGEL will be an invaluable resource for researchers interested in the effects of smoking on gene expression in the lungs. The server also provides useful information

  3. Web server's reliability improvements using recurrent neural networks

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan

    2012-01-01

    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity for the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (t...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...
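
    The general approach, feeding sliding windows of CPU, network and memory usage to a small recurrent network that flags an imminent error, might be sketched as below. The data is synthetic and the layer sizes are arbitrary; this is not the network or the monitoring data from the paper.

        # Hedged sketch: a small recurrent network trained on sliding windows of
        # CPU / network / memory usage to flag an imminent server error.
        # Synthetic data and arbitrary sizes; not the network from the paper.
        import numpy as np
        import tensorflow as tf

        rng = np.random.default_rng(1)
        samples, steps, features = 2000, 20, 3   # windows of (cpu, net, mem) usage
        X = rng.random((samples, steps, features)).astype("float32")
        # Toy label: an "error" follows when recent CPU usage is persistently high.
        y = (X[:, -5:, 0].mean(axis=1) > 0.7).astype("float32")

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(steps, features)),
            tf.keras.layers.SimpleRNN(16),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
        print(model.evaluate(X, y, verbose=0))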

  4. [Research and development of medical case database: a novel medical case information system integrating with biospecimen management].

    Science.gov (United States)

    Pan, Shiyang; Mu, Yuan; Wang, Hong; Wang, Tong; Huang, Peijun; Ma, Jianfeng; Jiang, Li; Zhang, Jie; Gu, Bing; Yi, Lujiang

    2010-04-01

    To meet the needs of managing medical case information and biospecimens simultaneously, we developed a novel medical case information system integrated with biospecimen management. The database, established with MS SQL Server 2000, covered basic information, clinical diagnosis, imaging diagnosis, pathological diagnosis and clinical treatment of patients; the physicochemical properties, inventory management and laboratory analysis of biospecimens; and user logs and data maintenance. The client application, developed with Visual C++ 6.0, was used to implement medical case and biospecimen management and was based on the Client/Server model. This system can perform input, browsing, querying and summarization of cases and related biospecimen information, and can automatically synthesize case records based on the database. The system supports not only long-term follow-up of individuals but also management of grouped cases organized according to the aim of the research. This system can improve the efficiency and quality of clinical research in which biospecimens are used in a coordinated way. It realizes integrated and dynamic management of medical cases and biospecimens and may be considered a new management platform.

  5. A Universal Logging System for LHCb Online

    International Nuclear Information System (INIS)

    Nikolaidis, Fotis; Brarda, Loic; Garnier, Jean-Christophe; Neufeld, Niko

    2011-01-01

    A log is a recording of a system's activity, intended to help the system administrator trace back an attack, find the causes of a malfunction and, more generally, troubleshoot problems. The fact that logs may be the only information an administrator has about an incident makes the logging system a crucial part of an IT infrastructure. In large-scale infrastructures, such as LHCb Online, where quite a few GB of logs are produced daily, it is impossible for a human to review all of these logs. Moreover, a great percentage of them is just noise. This makes clear that a more automated and sophisticated approach is needed. In this paper, we present a low-cost centralized logging system which allows us to do in-depth analysis of every log.
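
    For a sense of what feeding a centralized logging system looks like from a client node, the standard-library sketch below ships application log records to a central syslog collector. The collector address is a placeholder, and the paper's actual pipeline is not reproduced here.

        # Minimal sketch of a node shipping its logs to a central syslog collector
        # using only the standard library. The collector address is a placeholder.
        import logging
        import logging.handlers

        handler = logging.handlers.SysLogHandler(address=("loghost.example.org", 514))
        handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))

        log = logging.getLogger("online.node42")
        log.setLevel(logging.INFO)
        log.addHandler(handler)

        log.info("run started")
        log.warning("disk usage at 91%")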

  6. Windows Terminal Servers Orchestration

    Science.gov (United States)

    Bukowiec, Sebastian; Gaspar, Ricardo; Smith, Tim

    2017-10-01

    Windows Terminal Servers provide application gateways for various parts of the CERN accelerator complex, used by hundreds of CERN users every day. The combination of new tools such as Puppet, HAProxy and Microsoft System Center suite enable automation of provisioning workflows to provide a terminal server infrastructure that can scale up and down in an automated manner. The orchestration does not only reduce the time and effort necessary to deploy new instances, but also facilitates operations such as patching, analysis and recreation of compromised nodes as well as catering for workload peaks.

  7. The RNAsnp web server

    DEFF Research Database (Denmark)

    Radhakrishnan, Sabarinathan; Tafer, Hakim; Seemann, Ernst Stefan

    2013-01-01

    , are derived from extensive pre-computed tables of distributions of substitution effects as a function of gene length and GC content. Here, we present a web service that not only provides an interface for RNAsnp but also features a graphical output representation. In addition, the web server is connected...... to a local mirror of the UCSC genome browser database that enables the users to select the genomic sequences for analysis and visualize the results directly in the UCSC genome browser. The RNAsnp web server is freely available at: http://rth.dk/resources/rnasnp/....

  8. Optimizing queries in SQL Server 2008

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2010-05-01

    Full Text Available Starting from the need to develop efficient IT systems, we intend to review the optimization methods and tools that can be used by SQL Server database administrators and developers of applications based on Microsoft technology, focusing on the latest version of the proprietary DBMS, SQL Server 2008. We’ll reflect on the objectives to be considered in improving the performance of SQL Server instances, we will tackle the most commonly used techniques for analyzing and optimizing queries and we will describe the “Optimize for ad hoc workloads”, “Plan Freezing” and “Optimize for unknown” new options, accompanied by relevant code examples.

  9. SciServer: An Online Collaborative Environment for Big Data in Research and Education

    Science.gov (United States)

    Raddick, Jordan; Souter, Barbara; Lemson, Gerard; Taghizadeh-Popp, Manuchehr

    2017-01-01

    For the past year, SciServer Compute (http://compute.sciserver.org) has offered access to big data resources running within server-side Docker containers. Compute has allowed thousands of researchers to bring advanced analysis to big datasets like the Sloan Digital Sky Survey and others, while keeping the analysis close to the data for better performance and easier read/write access. SciServer Compute is just one part of the SciServer system being developed at Johns Hopkins University, which provides an easy-to-use collaborative research environment for astronomy and many other sciences.SciServer enables these collaborative research strategies using Jupyter notebooks, in which users can write their own Python and R scripts and execute them on the same server as the data. We have written special-purpose libraries for querying, reading, and writing data. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files.SciServer Compute’s virtual research environment has grown with the addition of task management and access control functions, allowing collaborators to share both data and analysis scripts securely across the world. These features also open up new possibilities for education, allowing instructors to share datasets with students and students to write analysis scripts to share with their instructors. We are leveraging these features into a new system called “SciServer Courseware,” which will allow instructors to share assignments with their students, allowing students to engage with big data in new ways.SciServer has also expanded to include more datasets beyond the Sloan Digital Sky Survey. A part of that growth has been the addition of the SkyQuery component, which allows for simple, fast

  10. Experience with Server Self Service Center (S3C)

    International Nuclear Information System (INIS)

    Sucik, Juraj; Bukowiec, Sebastian

    2010-01-01

    CERN has a successful experience with running Server Self Service Center (S3C) for virtual server provisioning which is based on Microsoft® Virtual Server 2005. With the introduction of Windows Server 2008 and its built-in hypervisor based virtualization (Hyper-V) there are new possibilities for the expansion of the current service. This paper describes the architecture of the redesigned virtual Server Self Service based on Hyper-V which provides dynamically scalable virtualized resources on demand as needed and outlines the possible implications on the future use of virtual machines at CERN.

  11. Experience with Server Self Service Center (S3C)

    CERN Multimedia

    Sucik, J

    2009-01-01

    CERN has a successful experience with running Server Self Service Center (S3C) for virtual server provisioning which is based on Microsoft® Virtual Server 2005. With the introduction of Windows Server 2008 and its built-in hypervisor based virtualization (Hyper-V) there are new possibilities for the expansion of the current service. This paper describes the architecture of the redesigned virtual Server Self Service based on Hyper-V which provides dynamically scalable virtualized resources on demand as needed and outlines the possible implications on the future use of virtual machines at CERN.

  12. Energy Efficiency in Small Server Rooms: Field Surveys and Findings

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Iris [Hoi; Greenberg, Steve; Mahdavi, Roozbeh; Brown, Richard; Tschudi, William

    2014-08-11

    Fifty-seven percent of US servers are housed in server closets, server rooms, and localized data centers, in what are commonly referred to as small server rooms, which comprise 99 percent of all server spaces in the US. While many mid-tier and enterprise-class data centers are owned by large corporations that consider energy efficiency a goal to minimize business operating costs, small server rooms typically are not similarly motivated. They are characterized by decentralized ownership and management and come in many configurations, which creates a unique set of efficiency challenges. To develop energy efficiency strategies for these spaces, we surveyed 30 small server rooms across eight institutions, and selected four of them for detailed assessments. The four rooms had Power Usage Effectiveness (PUE) values ranging from 1.5 to 2.1. Energy saving opportunities ranged from no- to low-cost measures such as raising cooling set points and better airflow management, to more involved but cost-effective measures including server consolidation and virtualization, and dedicated cooling with economizers. We found that inefficiencies mainly resulted from organizational rather than technical issues. Because of the inherent space and resource limitations, the most effective measure is to operate servers through energy-efficient cloud-based services or well-managed larger data centers, rather than server rooms. Backup power requirements and IT and cooling efficiency should be evaluated to minimize energy waste in the server space. Utility programs are instrumental in raising awareness and spreading technical knowledge on server operation, and the implementation of energy efficiency measures in small server rooms.
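
    For reference, Power Usage Effectiveness is the ratio of total facility energy to the energy delivered to the IT equipment, so the surveyed values of 1.5 to 2.1 mean that each kilowatt-hour of IT load carried roughly 0.5 to 1.1 kWh of cooling and distribution overhead. The numbers in the snippet below are illustrative only.

        # PUE = total facility energy / IT equipment energy; illustrative numbers.
        def pue(total_kwh: float, it_kwh: float) -> float:
            return total_kwh / it_kwh

        print(pue(21_000, 10_000))  # 2.1 -> 1.1 kWh overhead per kWh of IT load
        print(pue(15_000, 10_000))  # 1.5 -> 0.5 kWh overhead per kWh of IT load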

  13. Processing of gamma-ray spectrometric logs

    International Nuclear Information System (INIS)

    Umiastowski, K.; Dumesnil, P.

    1984-10-01

    CEA (Commissariat a l'Energie Atomique) has developed a gamma-ray spectrometric tool containing an analog-to-digital converter. This new tool makes it possible to perform very precise uranium logs (natural gamma-ray spectrometry), neutron activation logs and litho-density logs (gamma-gamma spectrometric logs). Specific processing methods were developed to treat the particular problems of down-hole gamma-ray spectrometry. Extraction of the characteristic gamma-ray peaks is possible even if they are superposed on background radiation of very high intensity. This processing method also makes it possible to obtain geological information contained in the continuous background of the spectrum. Computer programs are written in a high-level language for SIRIUS (VICTOR) and APOLLO computers. Examples of uranium and neutron activation log processing are presented [fr

  14. Server-Aided Two-Party Computation with Simultaneous Corruption

    DEFF Research Database (Denmark)

    Cascudo Pueyo, Ignacio; Damgård, Ivan Bjerre; Ranellucci, Samuel

    We consider secure two-party computation in the client-server model where there are two adversaries that operate separately but simultaneously, each of them corrupting one of the parties and a restricted subset of servers that they interact with. We model security via the local universal composab...

  15. An Open Source Web Map Server Implementation For California and the Digital Earth: Lessons Learned

    Science.gov (United States)

    Sullivan, D. V.; Sheffner, E. J.; Skiles, J. W.; Brass, J. A.; Condon, Estelle (Technical Monitor)

    2000-01-01

    This paper describes an Open Source implementation of the Open GIS Consortium's Web Map interface. It is based on the very popular Apache WWW Server, the Sun Microsystems Java Servlet Development Kit, and a C language shared library interface to a spatial datastore. This server was initially written as a proof of concept, to support a National Aeronautics and Space Administration (NASA) Digital Earth test bed demonstration. It will also find use in the California Land Science Information Partnership (CaLSIP), a joint program between NASA and the state of California. At least one WebMap-enabled server will be installed in every one of the state's 58 counties. This server will form a basis for a simple, easily maintained installation for those entities that do not yet require one of the larger, more expensive, commercial offerings.

  16. Log-inject-log in sand consolidation

    International Nuclear Information System (INIS)

    Murphy, R.P.; Spurlock, J.W.

    1977-01-01

    A method is described for gathering information for the determination of the adequacy of placement of sand consolidating plastic for sand control in oil and gas wells. The method uses a high neutron cross-section tracer which becomes part of the plastic and uses pulsed neutron logging before and after injection of the plastic. Preferably, the method uses lithium, boron, indium, and/or cadmium tracers. Boron oxide is especially useful and can be dissolved in alcohol and mixed with the plastic ingredients

  17. DMINDA: an integrated web server for DNA motif identification and analyses.

    Science.gov (United States)

    Ma, Qin; Zhang, Hanyuan; Mao, Xizeng; Zhou, Chuan; Liu, Bingqiang; Chen, Xin; Xu, Ying

    2014-07-01

    DMINDA (DNA motif identification and analyses) is an integrated web server for DNA motif identification and analyses, which is accessible at http://csbl.bmb.uga.edu/DMINDA/. This web site is freely available to all users and there is no login requirement. This server provides a suite of cis-regulatory motif analysis functions on DNA sequences, which are important to elucidation of the mechanisms of transcriptional regulation: (i) de novo motif finding for a given set of promoter sequences along with statistical scores for the predicted motifs derived based on information extracted from a control set, (ii) scanning motif instances of a query motif in provided genomic sequences, (iii) motif comparison and clustering of identified motifs, and (iv) co-occurrence analyses of query motifs in given promoter sequences. The server is powered by a backend computer cluster with over 150 computing nodes, and is particularly useful for motif prediction and analyses in prokaryotic genomes. We believe that DMINDA, as a new and comprehensive web server for cis-regulatory motif finding and analyses, will benefit the genomic research community in general and prokaryotic genome researchers in particular. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
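
    Function (ii) above, scanning instances of a query motif in supplied sequences, is conceptually a position-weight-matrix sliding-window score. The self-contained sketch below illustrates that generic idea with a made-up motif and sequence; it is not DMINDA's own algorithm or scoring scheme.

        # Generic position-weight-matrix scan: slide a motif over a DNA sequence
        # and report windows scoring above a threshold. Motif, sequence and
        # threshold are made up; this is not DMINDA's own algorithm.
        import math

        BASES = "ACGT"

        def log_odds_pwm(counts, background=0.25, pseudo=0.5):
            """Turn per-position base counts into a log-odds scoring matrix."""
            pwm = []
            for column in counts:
                total = sum(column.values()) + pseudo * len(BASES)
                pwm.append({b: math.log2((column.get(b, 0) + pseudo) / total / background)
                            for b in BASES})
            return pwm

        def scan(sequence, pwm, threshold=2.0):
            width = len(pwm)
            for i in range(len(sequence) - width + 1):
                window = sequence[i:i + width]
                score = sum(pwm[j][base] for j, base in enumerate(window))
                if score >= threshold:
                    yield i, window, score

        counts = [{"T": 8, "A": 1}, {"A": 9}, {"T": 9}, {"A": 8, "T": 1}]  # toy TATA-like motif
        pwm = log_odds_pwm(counts)
        for hit in scan("GCGTATAAGGCCTATTAGC", pwm):
            print(hit)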

  18. Cluster Server IPTV dengan Penjadwalan Algoritma Round Robin

    Directory of Open Access Journals (Sweden)

    Didik Aribowo

    2016-03-01

    Full Text Available The rapid development of information technology has been accompanied by a growing number of users connected to the Internet. A single server that constantly receives requests from many users will, slowly but surely, become overloaded and crash, so that some requests can no longer be served by that single server. A cluster architecture can be built using the concept of network load balancing, which allows data processing to be shared across several computers. This study uses the round-robin scheduling algorithm as an alternative solution to the problem of server overload, which can affect the performance of an IPTV system. The numbers of requests used in this study were 5000, 15000, 25000 and 50000. With this method, the performance of the scheduling algorithm can be observed through the following parameters: throughput, response time, reply connections and error connections, in order to determine the best scheduling algorithm for optimizing the IPTV server cluster. Load balancing automatically reduces the workload of each server so that no server is overloaded, allows the servers to use the available bandwidth more effectively, and provides fast access to the hosted web content. Implementing a web server cluster with a load-balancing scheme can keep system availability intact and provide sufficient scalability to continue serving every user request.
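
    The round-robin policy examined in the study simply hands each incoming request to the next server in a fixed cycle. A bare-bones dispatcher expressing that policy is sketched below with placeholder backend names; it is not the load balancer used in the IPTV testbed.

        # Bare-bones round-robin dispatcher: each request goes to the next backend
        # in a fixed cycle. Backend names are placeholders, not the testbed's nodes.
        from collections import Counter
        from itertools import cycle

        class RoundRobinBalancer:
            def __init__(self, backends):
                self._cycle = cycle(backends)

            def pick(self):
                return next(self._cycle)

        balancer = RoundRobinBalancer(["server-1", "server-2", "server-3"])
        assignments = Counter(balancer.pick() for _ in range(5000))
        print(assignments)  # requests spread evenly, roughly 1667 per backend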

  19. NOBAI: a web server for character coding of geometrical and statistical features in RNA structure

    Science.gov (United States)

    Knudsen, Vegeir; Caetano-Anollés, Gustavo

    2008-01-01

    The Numeration of Objects in Biology: Alignment Inferences (NOBAI) web server provides a web interface to the applications in the NOBAI software package. This software codes topological and thermodynamic information related to the secondary structure of RNA molecules as multi-state phylogenetic characters, builds character matrices directly in NEXUS format and provides sequence randomization options. The web server is an effective tool that facilitates the search for evolutionary history embedded in the structure of functional RNA molecules. The NOBAI web server is accessible at ‘http://www.manet.uiuc.edu/nobai/nobai.php’. This web site is free and open to all users and there is no login requirement. PMID:18448469

  20. Logistics competitiveness in Latin America: the logistics index vs. a methodological proposal

    Directory of Open Access Journals (Sweden)

    Marco Alberto Valenzo Jiménez

    2016-02-01

    Full Text Available This article presents the current state of logistics competitiveness in Latin American countries, based on the World Bank report “Connecting to Compete” (Trade Logistics in the Global Economy, 2007). The report ranks the positions of Latin American countries in terms of logistics and analyzes the variables (customs, infrastructure, international shipments, logistics competence, tracking and tracing, domestic logistics costs, and delivery time) contained in the methodology used by the World Bank. The study platform of this article starts from the logistics competitiveness index, which, through the application of the “Valenzo-Martínez” methodology, allows the database to be analyzed in greater depth, resulting in a logistics competitiveness scale that shows the general level of logistics competitiveness in Latin America as well as the level of competitiveness for each variable, thereby allowing an easy interpretation of the results.

  1. Geophysical well logging operations and log analysis in Geothermal Well Desert Peak No. B-23-1

    Energy Technology Data Exchange (ETDEWEB)

    Sethi, D.K.; Fertl, W.H.

    1980-03-01

    Geothermal Well Desert Peak No. B-23-1 was logged by Dresser Atlas during April/May 1979 to a total depth of 2939 m (9642 ft). A temperature of 209°C (408°F) was observed on the maximum thermometer run with one of the logging tools. Borehole tools rated to a maximum temperature of 204.4°C (400°F) were utilized for logging except for the Densilog tool, which was from the other set of borehole instruments, rated to a still higher temperature, i.e., 260°C (500°F). The quality of the logs recorded and the environmental effects on the log response have been considered. The log response in the unusual lithologies of igneous and metamorphic formations encountered in this well could be correlated with the drill cutting data. An empirical, statistical log interpretation approach has made it possible to obtain meaningful information on the rocks penetrated. Various crossplots/histograms of the corrected log data have been generated on the computer. These are found to provide good resolution between the lithological units in the rock sequence. The crossplotting techniques and the statistical approach were combined with the drill cutting descriptions in order to arrive at the lithological characteristics. The results of log analysis and recommendations for logging of future wells have been included.

  2. Online characterization of planetary surfaces: PlanetServer, an open-source analysis and visualization tool

    Science.gov (United States)

    Marco Figuera, R.; Pham Huu, B.; Rossi, A. P.; Minin, M.; Flahaut, J.; Halder, A.

    2018-01-01

    The lack of open-source tools for hyperspectral data visualization and analysis creates a demand for new tools. In this paper we present the new PlanetServer, a set of tools comprising a web Geographic Information System (GIS) and a recently developed Python Application Programming Interface (API) capable of visualizing and analyzing a wide variety of hyperspectral data from different planetary bodies. Current open-source WebGIS tools are evaluated in order to give an overview and to put in context how PlanetServer can help in these matters. The web client is thoroughly described, as are the datasets available in PlanetServer. The Python API is also described, together with the rationale for its development. Two examples of mineral characterization of different hydrosilicates, such as chlorites, prehnites and kaolinites, in the Nili Fossae area on Mars are presented. As the results obtained show a positive outcome for hyperspectral analysis and visualization compared to the previous literature, we suggest using the PlanetServer approach for such investigations.

  3. Maintenance in Single-Server Queues: A Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Najeeb Al-Matar

    2009-01-01

    We examine a single-server queue with bulk input and secondary work during the server's multiple vacations. When the buffer contents become exhausted, the server leaves the system to perform some diagnostic service on a minimum of L jobs clustered in packets of random sizes (event A). The server is not supposed to stay away longer than T units of time (event B). The server returns to the system when A or B occurs, whichever comes first. On the other hand, he may not break off service of a packet in the middle even if A or B occurs. Furthermore, the server waits for batches of customers to arrive if, upon his return, the queue is still empty. We obtain a compact and explicit functional form for the queueing process in equilibrium.

  4. Locating Nearby Copies of Replicated Internet Servers

    National Research Council Canada - National Science Library

    Guyton, James D; Schwartz, Michael F

    1995-01-01

    In this paper we consider the problem of choosing among a collection of replicated servers focusing on the question of how to make choices that segregate client/server traffic according to network topology...

  5. GeoServer beginner's guide

    CERN Document Server

    Youngblood, Brian

    2013-01-01

    Step-by-step instructions are included, and the needs of a beginner are fully addressed. The book contains plenty of examples with accompanying screenshots and code for an easy learning curve. You are a web developer with knowledge of server-side scripting and experience installing applications on the server. You want more than Google Maps, offering dynamically built maps on your site with your latest geospatial data stored in MySQL, PostGIS, MsSQL or Oracle. If this is the case, this book is meant for you.

  6. Server hardware trends

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk will cover the status of the current and upcoming offers on server platforms, focusing mainly on the processing and storage parts. Alternative solutions like Open Compute (OCP) will be quickly covered.

  7. On-line single server dial-a-ride problems

    NARCIS (Netherlands)

    Feuerstein, E.; Stougie, L.

    1998-01-01

    In this paper results on the dial-a-ride problem with a single server are presented. Requests for rides consist of two points in a metric space, a source and a destination. A ride has to be made by the server from the source to the destination. The server travels at unit speed in the metric space

  8. Personalized Pseudonyms for Servers in the Cloud

    OpenAIRE

    Xiao Qiuyu; Reiter Michael K.; Zhang Yinqian

    2017-01-01

    A considerable and growing fraction of servers, especially of web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve privacy of clients from network attackers residing between the clients and the cloud: We design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud’s tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced “popsicle”), ...

  9. Securing SQL Server Protecting Your Database from Attackers

    CERN Document Server

    Cherry, Denny

    2011-01-01

    There is a lot at stake for administrators taking care of servers, since they house sensitive data like credit cards, social security numbers, medical records, and much more. In Securing SQL Server you will learn about the potential attack vectors that can be used to break into your SQL Server database, and how to protect yourself from these attacks. Written by a Microsoft SQL Server MVP, you will learn how to properly secure your database, from both internal and external threats. Best practices and specific tricks employed by the author will also be revealed. Learn expert techniques to protec

  10. Implementation of SRPT Scheduling in Web Servers

    National Research Council Canada - National Science Library

    Harchol-Balter, Mor

    2000-01-01

    .... Experiments use the Linux operating system and the Flash web server. All experiments are repeated under a range of server loads and under both trace-based workloads and those generated by a Web workload generator...

  11. Tandem queue with server slow-down

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2007-01-01

    We study how rare events happen in the standard two-node tandem Jackson queue and in a generalization, the socalled slow-down network, see [2]. In the latter model the service rate of the first server depends on the number of jobs in the second queue: the first server slows down if the amount of

  12. Oceanotron, Scalable Server for Marine Observations

    Science.gov (United States)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, generally speaking, for water-column observation repositories, Ifremer decided to develop the oceanotron server (2010). Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OpeNDAP, ...), the server is designed to manage plugins: - StorageUnits: which enable reading of specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format). - FrontDesks: which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OpenDAP). In between, a third type of plugin may be inserted: - TransformationUnits: which enable ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in dB to meters under the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron frontdesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC/Observation & Measurement and Unidata/Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner-interoperability level makes it possible to capitalize on ocean business expertise in software development without being indentured to

  13. Nuclear well logging in hydrology

    International Nuclear Information System (INIS)

    1971-01-01

    they are described in detail elsewhere. The tracer techniques which have been included involve the use of well-logging methods to locate isotopic tracers inserted either in an adjacent borehole or in the same borehole as that in which the logs are made. Throughout the report, sufficient references have been selected to ensure that proven methods are adequately represented, but a comprehensive bibliography is not included. The International Atomic Energy Agency, at the request of the Coordinating Council of the International Hydrological Decade, is providing the Secretariat for the Working Group on Nuclear Techniques in Hydrology of the International Hydrological Decade (IHD). The Working Group and Secretariat have contributed to and coordinated the preparation of this report as well as an earlier more general report, Guidebook on Nuclear Techniques in Hydrology, IAEA Technical Reports Series No.91. Nuclear logging, along with other borehole geophysical methods, was adopted and developed primarily by the petroleum industry for use in exploration and developmental work. The information in this report shows that nuclear logging may also be useful in hydrology. Qualitative and under proper conditions quantitative interpretations about the physical, chemical, petrographic and hydraulic properties of formations and their contained fluids can be made from nuclear logs. The IHD Working Group on Nuclear Techniques in Hydrology, during its fourth meeting (in 1969), considered in detail the present status of nuclear logging with respect to hydrological investigations. Particularly it considered: (1) whether suitable equipment is at present available; (2) whether it could fulfil the need of hydrologists today; and (3) whether it was yet economic for use in hydrological investigations. The Working Group noted that the two main deficiencies in nuclear logging for hydrological purposes are: (1) the general lack of information in a coordinated form, and (2) the scarcity of

  14. Implementation of a Server Cluster on the Raspberry Pi Using the Load Balancing Method

    Directory of Open Access Journals (Sweden)

    Ridho Habi Putra

    2016-06-01

    Full Text Available The server is an essential part of a service in a computer network. The server's role can determine the quality of that service. Server failure can be caused by several factors, including hardware damage, the network system, and the power supply. One solution to overcome server failure in a computer network is server clustering. The aim of this research is to measure the capability of the Raspberry Pi (Raspi) when used as a web server. The Raspberry Pi used is the Raspberry Pi 2 Model B with an ARM Cortex-A7 processor running at 900 MHz and 1 GB of RAM. The operating system used on the Raspberry Pi is Linux Debian Wheezy. This study uses four Raspberry Pi devices, where two Raspis are used as web servers and the other two are used as a load balancer and a database server. The method used to build this server cluster is load balancing, in which the server load is distributed evenly across each node. Testing is performed by comparing the performance of a Raspberry Pi handling data traffic on its own without a load balancer with that of Raspberry Pis using a load balancer to balance the load among the members of the server cluster.
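
    As a rough illustration of the failover idea behind such a cluster, the sketch below probes each web node and dispatches only to nodes that still answer; it is not the authors' setup, and the node addresses and timeout are assumptions made for the example.

```python
import urllib.request
import urllib.error

# Hypothetical addresses of the two Raspberry Pi web nodes.
NODES = ["http://192.168.1.11", "http://192.168.1.12"]

def healthy_nodes(nodes, timeout=2.0):
    """Return the subset of nodes that answer an HTTP request within the timeout."""
    alive = []
    for url in nodes:
        try:
            urllib.request.urlopen(url, timeout=timeout)
            alive.append(url)
        except (urllib.error.URLError, OSError):
            pass  # treat the node as failed and skip it
    return alive

def dispatch(nodes=NODES):
    """Yield healthy nodes in rotation, re-checking their health on every cycle."""
    while True:
        alive = healthy_nodes(nodes)
        if not alive:
            raise RuntimeError("no healthy web nodes available")
        for url in alive:
            yield url
```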

  15. Energy-Reduction Offloading Technique for Streaming Media Servers

    Directory of Open Access Journals (Sweden)

    Yeongpil Cho

    2016-01-01

    Full Text Available Recent growth in the popularity of mobile video services raises demand for one of the most popular and convenient methods of delivering multimedia data, video streaming. However, the heterogeneity of currently existing mobile devices creates a need for separate video transcoding for each type of mobile device, such as smartphones, tablet PCs, and smart TVs. As a result, an additional burden falls on media servers, which pretranscode multimedia data for a number of clients. Given the even higher increase of video data on the Internet expected in the future, the problem of media server overload is impending. To address this problem, an offloading method is introduced in this paper. Using the SorTube offloading framework, the video transcoding process is shifted from the centralized media server to a local offloading server. Thus, clients can receive a personally customized video stream; meanwhile, the load on centralized servers is reduced.

  16. The Development of Mobile Server for Language Courses

    OpenAIRE

    Tokumoto, Hiroko; Yoshida, Mitsunobu

    2009-01-01

    The aim of this paper is to introduce the conceptual design of the mobile server software "MY Server" for language teaching drafted by Tokumoto. It reports how this software is designed and can be applied effectively to Japanese language teaching. Most current server systems for education require large-scale facilities, including high-spec server machines and professional administrators, which naturally results in big-budget projects that individual teachers or small schools canno...

  17. Getting started with SQL Server 2014 administration

    CERN Document Server

    Ellis, Gethyn

    2014-01-01

    This is an easy-to-follow, hands-on tutorial that includes real-world examples of SQL Server 2014's new features. Each chapter is explained in a step-by-step manner which guides you to implement the new technology. If you want to create a highly efficient database server then this book is for you. This book is for database professionals and system administrators who want to use the added features of SQL Server 2014 to create a hybrid environment, which is both highly available and allows you to get the best performance from your databases.

  18. Locating Hidden Servers

    National Research Council Canada - National Science Library

    Oeverlier, Lasse; Syverson, Paul F

    2006-01-01

    .... Announced properties include server resistance to distributed DoS. Both the EFF and Reporters Without Borders have issued guides that describe using hidden services via Tor to protect the safety of dissidents as well as to resist censorship...

  19. A study of factors affecting the adoption of server virtualization technology

    Science.gov (United States)

    Lu, Hsin-Ke; Lin, Peng-Chun; Chiang, Chang-Heng; Cho, Chien-An

    2018-04-01

    It has become a trend for enterprises and organizations worldwide to apply new technologies to improve their operations; moreover, constructing and managing traditional servers involves higher cost and less flexibility, so the current mainstream is to use server virtualization technology. However, organizations will not necessarily obtain the expected benefits from these new technologies, because each organization has its own level of complexity and ability to accept change. The researcher investigated the key factors affecting the adoption of virtualization technology in two phases. In phase I, the researcher reviewed the literature and then applied the dimensions of the "Information Systems Success Model" (ISSM) to generalize the factors affecting the adoption of virtualization technology into a preliminary theoretical framework and to develop a questionnaire; in phase II, a three-round Delphi method was used to integrate the opinions of experts from related fields, which were then gradually converged in order to obtain a stable and objective set of key factors. These results are expected to provide a reference for organizations adopting server virtualization technology and for future studies.

  20. Man vs. Machine: Differences in SPARQL Queries

    NARCIS (Netherlands)

    Rietveld, L.; Hoekstra, R.

    2014-01-01

    Server-side SPARQL query logs have been a topic of study for some time now. The USEWOD collection of query logs is currently the primary source of information for researchers. A recurring problem is that these logs leave application queries and queries created by humans indistinguishable. In this
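
    A hedged illustration of the distinction the authors are after: one crude way to label entries in a server-side query log is to combine the user agent with the pacing of requests. The CSV layout, field names and thresholds below are invented for the example and are not part of the USEWOD logs.

```python
import csv
from datetime import datetime

def classify_entry(user_agent, seconds_since_prev):
    """Crude heuristic: browser-like agents querying at human pace count as 'human'."""
    browser_like = any(tok in user_agent for tok in ("Mozilla", "Chrome", "Safari"))
    return "human" if browser_like and seconds_since_prev > 2.0 else "machine"

def classify_log(path):
    """Label each row of a query log with columns 'timestamp' and 'user_agent'."""
    labels, prev_ts = [], None
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            ts = datetime.fromisoformat(row["timestamp"])
            gap = (ts - prev_ts).total_seconds() if prev_ts else float("inf")
            labels.append(classify_entry(row.get("user_agent", ""), gap))
            prev_ts = ts
    return labels
```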

  1. Energy-efficient server management; Energieeffizientes Servermanagement

    Energy Technology Data Exchange (ETDEWEB)

    Sauter, B.

    2003-07-01

    This final report for the Swiss Federal Office of Energy (SFOE) presents the results of a project that aimed to develop an automatic shut-down system for the servers used in typical electronic data processing installations to be found in small and medium-sized enterprises. The purpose of shutting down these computers - the saving of energy - is discussed. The development of a shutdown unit on the basis of a web-server that automatically shuts down the servers connected to it and then interrupts their power supply is described. The functions of the unit, including pre-set times for switching on and off, remote operation via the Internet and its interaction with clients connected to it are discussed. Examples of the system's user interface are presented.

  2. Analysis of Java Distributed Architectures in Designing and Implementing a Client/Server Database System

    National Research Council Canada - National Science Library

    Akin, Ramis

    1998-01-01

    .... Information is scattered throughout organizations and must be easily accessible. A new solution is needed for effective and efficient management of data in today's distributed client/server environment...

  3. Information Interpretation Code For Providing Secure Data Integrity On Multi-Server Cloud Infrastructure

    OpenAIRE

    Sathiya Moorthy Srinivsan; Chandrasekar Chaillah

    2014-01-01

    Data security is one of the biggest concerns in the cloud computing environment. Although the advantages of storing data in a cloud computing environment are extremely high, a problem arises related to data loss. CyberLiveApp (CLA) supports secure application development between multiple users, even though cloud users have distinct viewing privileges when storing data. However, CyberLiveApp failed to integrate the system with certain cloud-based computing environments on multi-server. En...

  4. Car insurance information management system

    OpenAIRE

    Sun, Yu

    2015-01-01

    A customer information system is a typical information management system. It involves three aspects: the back-end database, the application development, and system maintenance. A car insurance information management system is based on a browser/server structure. Microsoft SQL Server provides the back-end database, and Active Server Pages (ASP), also from Microsoft, is used as the interface layer. The objective of this thesis was to apply ASP to the dynamic storage of a web page...

  5. Pro SQL Server 2012 relational database design and implementation

    CERN Document Server

    Davidson, Louis

    2012-01-01

    Learn effective and scalable database design techniques in a SQL Server environment. Pro SQL Server 2012 Relational Database Design and Implementation covers everything from design logic that business users will understand, all the way to the physical implementation of design in a SQL Server database. Grounded in best practices and a solid understanding of the underlying theory, Louis Davidson shows how to "get it right" in SQL Server database design and lay a solid groundwork for the future use of valuable business data. Gives a solid foundation in best practices and relational theory Covers

  6. Foundations of SQL Server 2008 R2 Business Intelligence

    CERN Document Server

    Fouche, Guy

    2011-01-01

    Foundations of SQL Server 2008 R2 Business Intelligence introduces the entire exciting gamut of business intelligence tools included with SQL Server 2008. Microsoft has designed SQL Server 2008 to be more than just a database. It's a complete business intelligence (BI) platform. The database is at its core, and surrounding the core are tools for data mining, modeling, reporting, analyzing, charting, and integration with other enterprise-level software packages. SQL Server 2008 puts an incredible amount of BI functionality at your disposal. But how do you take advantage of it? That's what this

  7. Building mail server on distributed computing system

    International Nuclear Information System (INIS)

    Akihiro Shibata; Osamu Hamada; Tomoko Oshikubo; Takashi Sasaki

    2001-01-01

    Electronic mail has become an indispensable function in daily work, and server stability and performance are required. Using DCE and DFS we have built a distributed electronic mail server; that is, servers such as SMTP and IMAP are distributed symmetrically and provide seamless access

  8. Geophysical borehole logging

    International Nuclear Information System (INIS)

    McCann, D.; Barton, K.J.; Hearn, K.

    1981-08-01

    Most of the available literature on geophysical borehole logging refers to studies carried out in sedimentary rocks. It is only in recent years that any great interest has been shown in geophysical logging in boreholes in metamorphic and igneous rocks following the development of research programmes associated with geothermal energy and nuclear waste disposal. This report is concerned with the programme of geophysical logging carried out on the three deep boreholes at Altnabreac, Caithness, to examine the effectiveness of these methods in crystalline rock. Of particular importance is the assessment of the performance of the various geophysical sondes run in the boreholes in relation to the rock mass properties. The geophysical data can be used to provide additional in-situ information on the geological, hydrogeological and engineering properties of the rock mass. Fracturing and weathering in the rock mass have a considerable effect on both the design parameters for an engineering structure and the flow of water through the rock mass; hence, the relation between the geophysical properties and the degree of fracturing and weathering is examined in some detail. (author)

  9. Professional Team Foundation Server 2012

    CERN Document Server

    Blankenship, Ed; Holliday, Grant; Keller, Brian

    2012-01-01

    A comprehensive guide to using Microsoft Team Foundation Server 2012 Team Foundation Server has become the leading Microsoft productivity tool for software management, and this book covers what developers need to know to use it effectively. Fully revised for the new features of TFS 2012, it provides developers and software project managers with step-by-step instructions and even assists those who are studying for the TFS 2012 certification exam. You'll find a broad overview of TFS, thorough coverage of core functions, a look at extensibility options, and more, written by Microsoft ins

  10. Towards an entropy-based analysis of log variability

    DEFF Research Database (Denmark)

    Back, Christoffer Olling; Debois, Søren; Slaats, Tijs

    2017-01-01

    the development of hybrid miners: given a (sub-)log, can we determine a priori whether the log is best suited for imperative or declarative mining? We propose using the concept of entropy, commonly used in information theory. We consider different measures for entropy that could be applied and show through experimentation on both synthetic and real-life logs that these entropy measures do indeed give insights into the complexity of the log and can act as an indicator of which mining paradigm should be used.

  11. Towards an Entropy-based Analysis of Log Variability

    DEFF Research Database (Denmark)

    Back, Christoffer Olling; Debois, Søren; Slaats, Tijs

    2018-01-01

    the development of hybrid miners: given a log, can we determine a priori whether the log is best suited for imperative or declarative mining? We propose using the concept of entropy, commonly used in information theory. We consider different measures for entropy that could be applied and show through experimentation on both synthetic and real-life logs that these entropy measures do indeed give insights into the complexity of the log and can act as an indicator of which mining paradigm should be used.
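
    These papers weigh several entropy measures; as a minimal, hedged sketch of the general idea (not the authors' measures), the snippet below computes the Shannon entropy of the distribution of trace variants in an event log, where higher values loosely indicate a more variable log.

```python
import math
from collections import Counter

def trace_variant_entropy(traces):
    """Shannon entropy (bits) of the trace-variant distribution of an event log."""
    variants = Counter(tuple(t) for t in traces)
    total = sum(variants.values())
    return -sum((c / total) * math.log2(c / total) for c in variants.values())

# Toy log with two variants of a simple process.
log = [["register", "check", "pay"], ["register", "pay"], ["register", "check", "pay"]]
print(round(trace_variant_entropy(log), 3))  # about 0.918 bits
```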

  12. IBM WebSphere Application Server 80 Administration Guide

    CERN Document Server

    Robinson, Steve

    2011-01-01

    IBM WebSphere Application Server 8.0 Administration Guide is a highly practical, example-driven tutorial. You will be introduced to WebSphere Application Server 8.0, and guided through configuration, deployment, and tuning for optimum performance. If you are an administrator who wants to get up and running with IBM WebSphere Application Server 8.0, then this book is not to be missed. Experience with WebSphere and Java would be an advantage, but is not essential.

  13. A polling model with an autonomous server

    NARCIS (Netherlands)

    de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.

    2009-01-01

    This paper considers polling systems with an autonomous server that remain at a queue for an exponential amount of time before moving to a next queue incurring a generally distributed switch-over time. The server remains at a queue until the exponential visit time expires, also when the queue

  14. Single-server queues with spatially distributed arrivals

    NARCIS (Netherlands)

    Kroese, Dirk; Schmidt, Volker

    1994-01-01

    Consider a queueing system where customers arrive at a circle according to a homogeneous Poisson process. After choosing their positions on the circle, according to a uniform distribution, they wait for a single server who travels on the circle. The server's movement is modelled by a Brownian motion

  15. Evaluation of the Intel Nehalem-EX server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2010-01-01

    In this paper we report on a set of benchmark results recently obtained by the CERN openlab by comparing the 4-socket, 32-core Intel Xeon X7560 server with the previous generation 4-socket server, based on the Xeon X7460 processor. The Xeon X7560 processor represents a major change in many respects, especially the memory sub-system, so it was important to make multiple comparisons. In most benchmarks the two 4-socket servers were compared. It should be underlined that both servers represent the “top of the line” in terms of frequency. However, in some cases, it was important to compare systems that integrated the latest processor features, such as QPI links, Symmetric multithreading and over-clocking via Turbo mode, and in such situations the X7560 server was compared to a dual socket L5520 based system with an identical frequency of 2.26 GHz. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following ...

  16. Client/server models for transparent, distributed computational resources

    International Nuclear Information System (INIS)

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

    Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. The recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models. Previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator is presented, including a discussion of the generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, are presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used in production/computation situations. The NPA incorporates a client/server interface for the transfer and translation of TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing the Cray to become a shared co-processor for the workstation application. 5 refs., 6 figs
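
    The RPC pattern described above can be imitated with Python's standard-library XML-RPC modules; this is only a stand-in for the generated RPC/XDR stubs discussed in the paper, and the function name and port are invented for the example.

```python
# Server side: expose a computation as a remotely callable procedure.
from xmlrpc.server import SimpleXMLRPCServer

def scale_results(values, factor):
    """Toy post-processing step, e.g. a unit conversion of simulation output."""
    return [v * factor for v in values]

def serve(host="localhost", port=8000):
    with SimpleXMLRPCServer((host, port), allow_none=True) as srv:
        srv.register_function(scale_results)
        srv.serve_forever()

# Client side (run in another process):
#   import xmlrpc.client
#   proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
#   print(proxy.scale_results([1.0, 2.5], 0.3048))

if __name__ == "__main__":
    serve()
```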

  17. Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431

    Energy Technology Data Exchange (ETDEWEB)

    Alliance to Save Energy; ICF Incorporated; ERG Incorporated; U.S. Environmental Protection Agency; Brown, Richard E; Brown, Richard; Masanet, Eric; Nordman, Bruce; Tschudi, Bill; Shehabi, Arman; Stanley, John; Koomey, Jonathan; Sartor, Dale; Chan, Peter; Loper, Joe; Capana, Steve; Hedman, Bruce; Duff, Rebecca; Haines, Evan; Sass, Danielle; Fanara, Andrew

    2007-08-02

    This report was prepared in response to the request from Congress stated in Public Law 109-431 (H.R. 5646),"An Act to Study and Promote the Use of Energy Efficient Computer Servers in the United States." This report assesses current trends in energy use and energy costs of data centers and servers in the U.S. (especially Federal government facilities) and outlines existing and emerging opportunities for improved energy efficiency. It also makes recommendations for pursuing these energy-efficiency opportunities broadly across the country through the use of information and incentive-based programs.

  18. AlignMe—a membrane protein sequence alignment web server

    Science.gov (United States)

    Stamm, Marcus; Staritzbichler, René; Khafizov, Kamil; Forrest, Lucy R.

    2014-01-01

    We present a web server for pair-wise alignment of membrane protein sequences, using the program AlignMe. The server makes available two operational modes of AlignMe: (i) sequence to sequence alignment, taking two sequences in fasta format as input, combining information about each sequence from multiple sources and producing a pair-wise alignment (PW mode); and (ii) alignment of two multiple sequence alignments to create family-averaged hydropathy profile alignments (HP mode). For the PW sequence alignment mode, four different optimized parameter sets are provided, each suited to pairs of sequences with a specific similarity level. These settings utilize different types of inputs: (position-specific) substitution matrices, secondary structure predictions and transmembrane propensities from transmembrane predictions or hydrophobicity scales. In the second (HP) mode, each input multiple sequence alignment is converted into a hydrophobicity profile averaged over the provided set of sequence homologs; the two profiles are then aligned. The HP mode enables qualitative comparison of transmembrane topologies (and therefore potentially of 3D folds) of two membrane proteins, which can be useful if the proteins have low sequence similarity. In summary, the AlignMe web server provides user-friendly access to a set of tools for analysis and comparison of membrane protein sequences. Access is available at http://www.bioinfo.mpg.de/AlignMe PMID:24753425

  19. Single-server blind quantum computation with quantum circuit model

    Science.gov (United States)

    Zhang, Xiaoqian; Weng, Jian; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing; Song, Tingting

    2018-06-01

    Blind quantum computation (BQC) enables the client, who has few quantum technologies, to delegate her quantum computation to a server, who has strong quantum computabilities and learns nothing about the client's quantum inputs, outputs and algorithms. In this article, we propose a single-server BQC protocol with quantum circuit model by replacing any quantum gate with the combination of rotation operators. The trap quantum circuits are introduced, together with the combination of rotation operators, such that the server is unknown about quantum algorithms. The client only needs to perform operations X and Z, while the server honestly performs rotation operators.

  20. CPU Server

    CERN Multimedia

    The CERN computer centre has hundreds of racks like these. They are over a million times more powerful than our first computer in the 1960's. This tray is a 'dual-core' server. This means it effectively has two CPUs in it (eg. two of your home computers minimised to fit into a single box). Also note the copper cooling fins, to help dissipate the heat.

  1. A tandem queue with delayed server release

    NARCIS (Netherlands)

    Nawijn, W.M.

    1997-01-01

    We consider a tandem queue with two stations. The first station is an s-server queue with Poisson arrivals and exponential service times. After terminating his service in the first station, a customer enters the second station to require service at an exponential single server, while in the meantime he

  2. Design and implementation of streaming media server cluster based on FFMpeg.

    Science.gov (United States)

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.

  3. Design and Implementation of Streaming Media Server Cluster Based on FFMpeg

    Science.gov (United States)

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system. PMID:25734187

  4. Construction of a nuclear data server using TCP/IP

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko; Sakai, Osamu [Kyushu Univ., Fukuoka (Japan)

    1997-03-01

    We construct a nuclear data server which provides data from the evaluated nuclear data library over the network by means of TCP/IP. The client is not necessarily a user but may be a computer program. Two examples with a prototype server program are demonstrated: the first is data transfer from the server to a user, and the second is transfer to a computer program. (author)
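
    The idea of serving library data over plain TCP can be sketched in a few lines; the snippet below is a hypothetical line-oriented server with a tiny in-memory table standing in for an evaluated nuclear data library, and none of it reflects the authors' prototype.

```python
import socket

# Toy lookup table standing in for an evaluated nuclear data library.
DATA = {"U-235": "thermal fission nu-bar ~ 2.43", "H-1": "thermal capture ~ 0.33 b"}

def serve(host="localhost", port=9000):
    """Answer one request per connection: nuclide name in, one reply line out."""
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _addr = srv.accept()
            with conn:
                request = conn.recv(1024).decode().strip()
                reply = DATA.get(request, "unknown nuclide")
                conn.sendall(reply.encode() + b"\n")
```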

  5. ArcGIS Server for the distribution of public transport information

    Directory of Open Access Journals (Sweden)

    Flaminia Leggeri

    2012-06-01

    ArcGIS Server for public transportation info service in Rome. One goal of ATAC, the public transport operator of the Municipality of Rome, is to provide information to the daily users of the transportation network. In recent years, efforts to improve both the quantity and quality of information have reached a high level of integration in order to provide the end user with good web mapping systems available on the Internet.

  6. UNRES server for physics-based coarse-grained simulations and prediction of protein structure, dynamics and thermodynamics.

    Science.gov (United States)

    Czaplewski, Cezary; Karczynska, Agnieszka; Sieradzan, Adam K; Liwo, Adam

    2018-04-30

    A server implementation of the UNRES package (http://www.unres.pl) for coarse-grained simulations of protein structures with the physics-based UNRES model, coined a name UNRES server, is presented. In contrast to most of the protein coarse-grained models, owing to its physics-based origin, the UNRES force field can be used in simulations, including those aimed at protein-structure prediction, without ancillary information from structural databases; however, the implementation includes the possibility of using restraints. Local energy minimization, canonical molecular dynamics simulations, replica exchange and multiplexed replica exchange molecular dynamics simulations can be run with the current UNRES server; the latter are suitable for protein-structure prediction. The user-supplied input includes protein sequence and, optionally, restraints from secondary-structure prediction or small x-ray scattering data, and simulation type and parameters which are selected or typed in. Oligomeric proteins, as well as those containing D-amino-acid residues and disulfide links can be treated. The output is displayed graphically (minimized structures, trajectories, final models, analysis of trajectory/ensembles); however, all output files can be downloaded by the user. The UNRES server can be freely accessed at http://unres-server.chem.ug.edu.pl.

  7. Virtual IP Server Application for Microcontrollers

    OpenAIRE

    Ashari, Ahmad

    2008-01-01

    Until now, a microcontroller connected to a single computer could only be accessed through one IP address, even though most operating systems can now provide more than one IP address per computer in the form of virtual IPs. This research examines the use of virtual IPs created by IP aliasing on the Linux operating system as a Virtual IP Server for microcontrollers. The basic principle of the Virtual IP Server is the creation of a Virtual Host on each IP address to process data packets and to translate ...

  8. LiveBench-1: continuous benchmarking of protein structure prediction servers.

    Science.gov (United States)

    Bujnicki, J M; Elofsson, A; Fischer, D; Rychlewski, L

    2001-02-01

    We present a novel, continuous approach aimed at the large-scale assessment of the performance of available fold-recognition servers. Six popular servers were investigated: PDB-Blast, FFAS, T98-lib, GenTHREADER, 3D-PSSM, and INBGU. The assessment was conducted using as prediction targets a large number of selected protein structures released from October 1999 to April 2000. A target was selected if its sequence showed no significant similarity to any of the proteins previously available in the structural database. Overall, the servers were able to produce structurally similar models for one-half of the targets, but significantly accurate sequence-structure alignments were produced for only one-third of the targets. We further classified the targets into two sets: easy and hard. We found that all servers were able to find the correct answer for the vast majority of the easy targets if a structurally similar fold was present in the server's fold libraries. However, among the hard targets--where standard methods such as PSI-BLAST fail--the most sensitive fold-recognition servers were able to produce similar models for only 40% of the cases, half of which had a significantly accurate sequence-structure alignment. Among the hard targets, the presence of updated libraries appeared to be less critical for the ranking. An "ideally combined consensus" prediction, where the results of all servers are considered, would increase the percentage of correct assignments by 50%. Each server had a number of cases with a correct assignment, where the assignments of all the other servers were wrong. This emphasizes the benefits of considering more than one server in difficult prediction tasks. The LiveBench program (http://BioInfo.PL/LiveBench) is being continued, and all interested developers are cordially invited to join.

  9. One less trip : logging with less tripping, more protection

    Energy Technology Data Exchange (ETDEWEB)

    Byfield, M.

    2005-12-15

    New logging technology by Datalog Technology Inc. was described. Logging-while-tripping (LWT) technology uses a slim petrophysical sensor package that is moved to the targeted geological formation through a drill pipe, which reduces the exposure to vibration and shock involved in logging-while-drilling (LWD). The equipment features standard components in a patented configuration and comes in 2 segments: the receiver sub and the sensor package electronics. A receiver sub is inserted into the bottomhole assembly at the end of the drill string. Drilling progresses with the LWT sub in the bottomhole assembly until the borehole approaches the logging depth. The sensor package and electronics are then lowered into the drill string. If the well is horizontal, rig pumps push the package into the drill string until it lands in the LWT sub. Drill pipes are moved across the zone of interest and logs are recorded on downhole memory contained within the LWT package. As the logging operation progresses, a depth recorder at the surface records depth information along with the downhole recorders. When logging is completed, downhole tools are retrieved, and data downloaded from the LWT onboard memory is merged with the surface depth information to generate well logs. Retrieval via the drill string greatly reduces the risk of losing the logging gear, which contains radioactive material. Federal officials now routinely insist on extensive fishing operations to retrieve lost tools. If a well gets a gas kick while logging is in progress, the operator can still pump down mud or close the blowout preventer rams if necessary, and save time in determining where to perforate shallow gas wells. Compensated neutron logs, gamma rays, spectrum gamma rays, and induction have been tested with the LWT system. It was concluded that Petro-Canada has deployed the logs recently and has achieved results that compared satisfactorily with conventional logs. 2 figs.
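
    The depth-merge step described above can be illustrated with a short, hypothetical sketch (pandas assumed; file layout and column names invented): the downhole memory records sensor readings against time, the surface system records bit depth against time, and the two are joined on the nearest earlier timestamp to produce a depth-indexed log.

```python
import pandas as pd

def merge_depth(downhole_csv, surface_csv):
    """Join time-stamped downhole readings with surface depth records to build a log.

    Both CSV files are assumed to have a 'time' column; the surface file also has
    a 'depth' column, and the downhole file carries the sensor measurements.
    """
    sensors = pd.read_csv(downhole_csv, parse_dates=["time"]).sort_values("time")
    depths = pd.read_csv(surface_csv, parse_dates=["time"]).sort_values("time")
    merged = pd.merge_asof(sensors, depths, on="time", direction="backward")
    return merged.set_index("depth").sort_index()
```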

  10. Getting started with SQL Server 2012 cube development

    CERN Document Server

    Lidberg, Simon

    2013-01-01

    As a practical tutorial for Analysis Services, get started with developing cubes. ""Getting Started with SQL Server 2012 Cube Development"" walks you through the basics, working with SSAS to build cubes and get them up and running.Written for SQL Server developers who have not previously worked with Analysis Services. It is assumed that you have experience with relational databases, but no prior knowledge of cube development is required. You need SQL Server 2012 in order to follow along with the exercises in this book.

  11. Design of an Electronic Healthcare Record Server Based on Part 1 of ISO EN 13606

    Directory of Open Access Journals (Sweden)

    Tony Austin

    2011-01-01

    Full Text Available ISO EN 13606 is a newly approved standard at European and ISO levels for the meaningful exchange of clinical information between systems. Although conceived as an inter-operability standard to which existing electronic health record (EHR systems will transform legacy data, the requirements met and architectural approach reflected in this standard also make it a good candidate for the internal architecture of an EHR server. The authors have built such a server for the storage of healthcare records and demonstrated that it is possible to use ISO EN 13606 part 1 as the basis of an internal system architecture. The development of the system and some of the applications of the server are described in this paper. It is the first known operational implementation of the standard as an EHR system.

  12. On a Batch Arrival Queuing System Equipped with a Stand-by Server during Vacation Periods or the Repairs Times of the Main Server

    Directory of Open Access Journals (Sweden)

    Rehab F. Khalaf

    2011-01-01

    Full Text Available We study a queuing system which is equipped with a stand-by server in addition to the main server. The stand-by server provides service to customers only during the period of absence of the main server when either the main server is on a vacation or it is in the state of repairs due to a sudden failure from time to time. The service times, vacation times, and repair times are assumed to follow general arbitrary distributions while the stand-by service times follow exponential distribution. Supplementary variables technique has been used to obtain steady state results in explicit and closed form in terms of the probability generating functions for the number of customers in the queue, the average number of customers, and the average waiting time in the queue while the MathCad software has been used to illustrate the numerical results in this work.

  13. Comparison of Certification Authority Roles in Windows Server 2003 and Windows Server 2008

    Directory of Open Access Journals (Sweden)

    A. I. Luchnik

    2011-03-01

    Full Text Available An analysis of the Certification Authority components of Microsoft server operating systems was conducted. Based on the results, the main directions of development of certification authorities and PKI were highlighted.

  14. HDOCK: a web server for protein–protein and protein–DNA/RNA docking based on a hybrid strategy

    Science.gov (United States)

    Yan, Yumeng; Zhang, Di; Zhou, Pei; Li, Botong

    2017-01-01

    Abstract Protein–protein and protein–DNA/RNA interactions play a fundamental role in a variety of biological processes. Determining the complex structures of these interactions is valuable, in which molecular docking has played an important role. To automatically make use of the binding information from the PDB in docking, here we have presented HDOCK, a novel web server of our hybrid docking algorithm of template-based modeling and free docking, in which cases with misleading templates can be rescued by the free docking protocol. The server supports protein–protein and protein–DNA/RNA docking and accepts both sequence and structure inputs for proteins. The docking process is fast and consumes about 10–20 min for a docking run. Tested on the cases with weakly homologous complexes of server. The HDOCK web server is available at http://hdock.phys.hust.edu.cn/. PMID:28521030

  15. Grading sugar pine saw logs in trees.

    Science.gov (United States)

    John W. Henley

    1972-01-01

    Small limbs and small overgrown limbs cause problems when grading saw logs in sugar pine trees. Surface characteristics and lumber recovery information for 426 logs from 64 sugar pine trees were examined. Resulting modifications in the grading specification that allow a grader to ignore small limbs and small limb indicators do not appear to decrease the performance of...

  16. An artificial intelligence approach to well log correlation

    International Nuclear Information System (INIS)

    Startzman, R.A.; Kuo, T.B.

    1986-01-01

    This paper shows how an expert computer system was developed to correlate two well logs in at least moderately difficult situations. A four step process was devised to process log trace information and apply a set of rules to identify zonal correlations. Some of the advantages and problems with the artificial intelligence approach are shown using field logs. The approach is useful and, if properly and systematically applied, it can result in good correlations
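
    The paper's expert system applies rules to zonal features; purely as a numerical contrast, the sketch below (NumPy assumed, synthetic data) aligns two equally sampled log traces by picking the shift that maximizes their cross-correlation, which is one simple building block for correlation but not the authors' method.

```python
import numpy as np

def best_shift(log_a, log_b):
    """Shift (in samples) of log_a relative to log_b that maximizes cross-correlation."""
    a = (log_a - log_a.mean()) / log_a.std()
    b = (log_b - log_b.mean()) / log_b.std()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Synthetic check: the second trace is the first one shifted by five samples.
rng = np.random.default_rng(0)
trace = rng.normal(size=200)
print(best_shift(np.roll(trace, 5), trace))  # expected to be close to 5
```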

  17. An Evaluation of Alternative Designs for a Grid Information Service

    Science.gov (United States)

    Smith, Warren; Waheed, Abdul; Meyers, David; Yan, Jerry; Kwak, Dochan (Technical Monitor)

    2001-01-01

    The Globus information service wasn't working well. There were many updates of data from Globus daemons which saturated the single server and users couldn't retrieve information. We created a second server for NASA and Alliance. Things were great on that server, but a bit slow on the other server. We needed to know exactly how the information service was being used. What were the best servers and configurations? This viewgraph presentation gives an overview of the evaluation of alternative designs for a Grid Information Service. Details are given on the workload characterization, methodology used, and the performance evaluation.

  18. Well log characterization of natural gas-hydrates

    Science.gov (United States)

    Collett, Timothy S.; Lee, Myung W.

    2012-01-01

    In the last 25 years there have been significant advancements in the use of well-logging tools to acquire detailed information on the occurrence of gas hydrates in nature: whereas wireline electrical resistivity and acoustic logs were formerly used to identify gas-hydrate occurrences in wells drilled in Arctic permafrost environments, more advanced wireline and logging-while-drilling (LWD) tools are now routinely used to examine the petrophysical nature of gas-hydrate reservoirs and the distribution and concentration of gas hydrates within various complex reservoir systems. Resistivity- and acoustic-logging tools are the most widely used for estimating the gas-hydrate content (i.e., reservoir saturations) in various sediment types and geologic settings. Recent integrated sediment coring and well-log studies have confirmed that electrical-resistivity and acoustic-velocity data can yield accurate gas-hydrate saturations in sediment grain-supported (isotropic) systems such as sand reservoirs, but more advanced log-analysis models are required to characterize gas hydrate in fractured (anisotropic) reservoir systems. New well-logging tools designed to make directionally oriented acoustic and propagation-resistivity log measurements provide the data needed to analyze the acoustic and electrical anisotropic properties of both highly interbedded and fracture-dominated gas-hydrate reservoirs. Advancements in nuclear magnetic resonance (NMR) logging and wireline formation testing (WFT) also allow for the characterization of gas hydrate at the pore scale. Integrated NMR and formation testing studies from northern Canada and Alaska have yielded valuable insight into how gas hydrates are physically distributed in sediments and the occurrence and nature of pore fluids(i.e., free water along with clay- and capillary-bound water) in gas-hydrate-bearing reservoirs. Information on the distribution of gas hydrate at the pore scale has provided invaluable insight on the mechanisms

  19. GalaxyHomomer: a web server for protein homo-oligomer structure prediction from a monomer sequence or structure.

    Science.gov (United States)

    Baek, Minkyung; Park, Taeyong; Heo, Lim; Park, Chiwook; Seok, Chaok

    2017-07-03

    Homo-oligomerization of proteins is abundant in nature, and is often intimately related with the physiological functions of proteins, such as in metabolism, signal transduction or immunity. Information on the homo-oligomer structure is therefore important to obtain a molecular-level understanding of protein functions and their regulation. Currently available web servers predict protein homo-oligomer structures either by template-based modeling using homo-oligomer templates selected from the protein structure database or by ab initio docking of monomer structures resolved by experiment or predicted by computation. The GalaxyHomomer server, freely accessible at http://galaxy.seoklab.org/homomer, carries out template-based modeling, ab initio docking or both depending on the availability of proper oligomer templates. It also incorporates recently developed model refinement methods that can consistently improve model quality. Moreover, the server provides additional options that can be chosen by the user depending on the availability of information on the monomer structure, oligomeric state and locations of unreliable/flexible loops or termini. The performance of the server was better than or comparable to that of other available methods when tested on benchmark sets and in a recent CASP performed in a blind fashion. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. The SMARTCyp cytochrome P450 metabolism prediction server

    DEFF Research Database (Denmark)

    Rydberg, Patrik; Gloriam, David Erik Immanuel; Olsen, Lars

    2010-01-01

    The SMARTCyp server is the first web application for site of metabolism prediction of cytochrome P450-mediated drug metabolism.

  1. Practical borehole logging procedures for mineral exploration, with emphasis on uranium

    International Nuclear Information System (INIS)

    1986-01-01

    Borehole logging is a basic tool in the exploration for and delineation of uranium deposits. This manual describes recommended procedures for carrying out borehole logging, concentrating on practical aspects of the operation of interest to those actually involved in day-to-day field work. The book begins with a discussion of boreholes and then deals with gamma ray logging as the main method of interest. Information is also provided on other techniques including resistance, spontaneous potential, density and neutron logging. Field procedures are described, and examples of logs and interpretations are given. The appendices provide information on calibration procedures and correction factors, a glossary of useful terms and some relevant basic data regarding drill holes and drilling

  2. Two Stage Secure Dynamic Load Balancing Architecture for SIP Server Clusters

    Directory of Open Access Journals (Sweden)

    G. Vennila

    2014-08-01

    Full Text Available Session Initiation Protocol (SIP) is a signaling protocol that emerged to enhance IP network capabilities in terms of complex service provision. SIP server scalability with load balancing is a growing concern due to the dramatic increase in demand for SIP services. Load balancing of session methods (request/response), together with security measures, allows the SIP server to regulate network traffic in Voice over Internet Protocol (VoIP). Establishing a honeywall in front of the load balancer significantly reduces SIP traffic and drops inbound malicious load. In this paper, we propose the Active Least Call in SIP Server (ALC_Server) algorithm, which fulfills objectives such as congestion avoidance, improved response times, throughput, resource utilization, reduced server faults, scalability, and protection of SIP calls from DoS attacks. Test-bed results demonstrate that the proposed two-tier ALC_Server architecture dynamically controls overload and provides robust security and uniform load distribution for SIP servers.

  3. VT Route Log Points 2017

    Data.gov (United States)

    Vermont Center for Geographic Information — This data layer is used with VTrans' Integrated Route Log System (IRA). It is also used to calibrate the linear referencing systems, including the End-to-End and...

  4. Asynchronous data change notification between database server and accelerator controls system

    International Nuclear Information System (INIS)

    Fu, W.; Morris, J.; Nemesure, S.

    2011-01-01

    Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMS's which support DCN (such as Oracle and MS SQL server), some server side and/or client side programming may be required to make the DCN system work. This makes the setup of DCN between database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification (ADCN) between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
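    The trigger-plus-relay idea described above can be pictured with a small sketch. The paper's implementation uses Oracle/MS SQL triggers feeding CDEV/EPICS/ADO reflection servers; the version below is only a minimal stand-in that assumes PostgreSQL (11+ for EXECUTE FUNCTION), the psycopg2 driver, and a hypothetical "settings" table with an "id" column, with the reflection server reduced to a publish callback.

        # Minimal sketch of trigger-based asynchronous data change notification (ADCN).
        # Assumptions: PostgreSQL with psycopg2; the paper itself uses Oracle/MS SQL
        # triggers feeding CDEV/EPICS/ADO reflection servers, which are not shown here.
        import select
        import psycopg2
        import psycopg2.extensions

        SETUP_SQL = """
        CREATE OR REPLACE FUNCTION notify_change() RETURNS trigger AS $$
        BEGIN
            -- Publish the changed row's key on the 'data_change' channel.
            PERFORM pg_notify('data_change', NEW.id::text);
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        DROP TRIGGER IF EXISTS settings_change ON settings;
        CREATE TRIGGER settings_change
        AFTER INSERT OR UPDATE ON settings
        FOR EACH ROW EXECUTE FUNCTION notify_change();
        """

        def relay_changes(dsn, publish):
            """Listen for database notifications and forward them to clients."""
            conn = psycopg2.connect(dsn)
            conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
            cur = conn.cursor()
            cur.execute(SETUP_SQL)
            cur.execute("LISTEN data_change;")
            while True:
                # Block until the database reports a change (or time out and loop).
                if select.select([conn], [], [], 5.0) == ([], [], []):
                    continue
                conn.poll()
                while conn.notifies:
                    note = conn.notifies.pop(0)
                    publish(note.channel, note.payload)  # e.g. a SET on a reflection server

        if __name__ == "__main__":
            relay_changes("dbname=controls", publish=lambda ch, key: print(ch, key))

    Because the clients only see the reflection server's standard SET/GET interface, the database-specific trigger details stay hidden behind the relay process, which is the point made in the abstract.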

  5. Importance of well logging measurements in the design of underground railway tunnels

    International Nuclear Information System (INIS)

    Kiss, E.Z.; Szlaboczky, P.

    1981-01-01

    The paper shows how logs can be used in the construction of underground railway tunnels in Tertiary sediments. Even standard well logging techniques (electric conductivity, gamma logging) can provide important additional information on the wells if conclusions concerning construction technology are drawn from the logs. In the course of continuous research work, the application of well logs provides essential help when the measurements give in-situ information on absolute values along the well sections, revealing the various geological formations from the distribution of characteristic parameters. Well logging increases the resolving power of the mechanical method of layer differentiation. Beside the usual geological interpretation of logs, zones of shifting rocks, hard and friable formations, as well as intercalations leading to problems in construction technology, can be pointed out. (author)

  6. Analisis Algoritma Pergantian Cache Pada Proxy Web Server Internet Dengan Simulasi

    OpenAIRE

    Nurwarsito, Heru

    2007-01-01

    The number of internet clients keeps growing over time, so the response of internet access becomes increasingly slow. To help improve access speed, a cache on the Proxy Server is required. This research aims to analyze the performance of a Proxy Server on an Internet network with respect to the cache replacement algorithm it uses. The analysis of cache replacement algorithms on the Proxy Server was designed using a simulation model of an internet network consisting of a Web server, Proxy ...

  7. 4DGeoBrowser: A Web-Based Data Browser and Server for Accessing and Analyzing Multi-Disciplinary Data

    National Research Council Canada - National Science Library

    Lerner, Steven

    2001-01-01

    .... Once the information is loaded onto a Geobrowser server the investigator-user is able to login to the website and use a set of data access and analysis tools to search, plot, and display this information...

  8. The design and implementation of an automated system for logging clinical experiences using an anesthesia information management system.

    Science.gov (United States)

    Simpao, Allan; Heitz, James W; McNulty, Stephen E; Chekemian, Beth; Brenn, B Randall; Epstein, Richard H

    2011-02-01

    Residents in anesthesia training programs throughout the world are required to document their clinical cases to help ensure that they receive adequate training. Current systems involve self-reporting, are subject to delayed updates and misreported data, and do not provide a practicable method of validation. Anesthesia information management systems (AIMS) are being used increasingly in training programs and are a logical source for verifiable documentation. We hypothesized that case logs generated automatically from an AIMS would be sufficiently accurate to replace the current manual process. We based our analysis on the data reporting requirements of the Accreditation Council for Graduate Medical Education (ACGME). We conducted a systematic review of ACGME requirements and our AIMS record, and made modifications after identifying data element and attribution issues. We studied 2 methods (parsing of free text procedure descriptions and CPT4 procedure code mapping) to automatically determine ACGME case categories and generated AIMS-based case logs and compared these to assignments made by manual inspection of the anesthesia records. We also assessed under- and overreporting of cases entered manually by our residents into the ACGME website. The parsing and mapping methods assigned cases to a majority of the ACGME categories with accuracies of 95% and 97%, respectively, as compared with determinations made by 2 residents and 1 attending who manually reviewed all procedure descriptions. Comparison of AIMS-based case logs with reports from the ACGME Resident Case Log System website showed that >50% of residents either underreported or overreported their total case counts by at least 5%. The AIMS database is a source of contemporaneous documentation of resident experience that can be queried to generate valid, verifiable case logs. The extent of AIMS adoption by academic anesthesia departments should encourage accreditation organizations to support uploading of AIMS-based case
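    A hypothetical sketch of the two categorization methods described above follows; the keywords, CPT4 codes, and category names are illustrative placeholders, not the mappings actually used in the study.

        # Hypothetical sketch of the two case-categorization methods described above:
        # (1) keyword parsing of free-text procedure descriptions, and
        # (2) mapping of CPT4 procedure codes to ACGME case categories.
        # All keywords, codes and category names are illustrative placeholders.
        KEYWORD_RULES = {
            "cesarean": "Obstetric - cesarean",
            "craniotomy": "Intracerebral",
            "bypass": "Cardiac with CPB",
        }

        CPT4_MAP = {
            "59510": "Obstetric - cesarean",   # placeholder code/category pair
            "61510": "Intracerebral",
            "33533": "Cardiac with CPB",
        }

        def categorize_by_text(description: str) -> str | None:
            text = description.lower()
            for keyword, category in KEYWORD_RULES.items():
                if keyword in text:
                    return category
            return None

        def categorize_by_cpt4(cpt4_codes: list[str]) -> str | None:
            for code in cpt4_codes:
                if code in CPT4_MAP:
                    return CPT4_MAP[code]
            return None

        def categorize_case(description: str, cpt4_codes: list[str]) -> str:
            # Prefer the code mapping (97% accurate in the study), fall back to parsing.
            return (categorize_by_cpt4(cpt4_codes)
                    or categorize_by_text(description)
                    or "Uncategorized")

        print(categorize_case("Emergent cesarean delivery under general anesthesia", ["59510"]))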

  9. Dynamic Planar Convex Hull with Optimal Query Time and O(log n · log log n ) Update Time

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jakob, Riko

    2000-01-01

    The dynamic maintenance of the convex hull of a set of points in the plane is one of the most important problems in computational geometry. We present a data structure supporting point insertions in amortized O(log n · log log log n) time, point deletions in amortized O(log n · log log n) time, and various queries about the convex hull in optimal O(log n) worst-case time. The data structure requires O(n) space. Applications of the new dynamic convex hull data structure are improved deterministic algorithms for the k-level problem and the red-blue segment intersection problem where all red and all...

  10. Solution for an Improved WEB Server

    Directory of Open Access Journals (Sweden)

    George PECHERLE

    2009-12-01

    Full Text Available We want to present a solution for obtaining maximum performance from a web server, in terms of the services that the server provides. We do not always know what tools to use or how to configure what we have in order to get what we need. Keeping the Internet-related services you provide in working condition can sometimes be a real challenge. And with the increasing demand for Internet services, we need to come up with solutions to problems that occur every day.

  11. On the single-server retrial queue

    Directory of Open Access Journals (Sweden)

    Djellab Natalia V.

    2006-01-01

    Full Text Available In this work, we review the stochastic decomposition for the number of customers in M/G/1 retrial queues with a reliable server and with a server subject to breakdowns, which has been the subject of investigation in the literature. Using the decomposition property of M/G/1 retrial queues with breakdowns, which holds under the exponential assumption for retrial times, as an approximation in the non-exponential case, we consider an approximate solution for the steady-state queue size distribution.

  12. TMFoldWeb: a web server for predicting transmembrane protein fold class.

    Science.gov (United States)

    Kozma, Dániel; Tusnády, Gábor E

    2015-09-17

    Here we present TMFoldWeb, the web server implementation of TMFoldRec, a transmembrane protein fold recognition algorithm. TMFoldRec uses statistical potentials and utilizes topology filtering and a gapless threading algorithm. It ranks template structures and selects the most likely candidates and estimates the reliability of the obtained lowest energy model. The statistical potential was developed in a maximum likelihood framework on a representative set of the PDBTM database. According to the benchmark test the performance of TMFoldRec is about 77 % in correctly predicting fold class for a given transmembrane protein sequence. An intuitive web interface has been developed for the recently published TMFoldRec algorithm. The query sequence goes through a pipeline of topology prediction and a systematic sequence to structure alignment (threading). Resulting templates are ordered by energy and reliability values and are colored according to their significance level. Besides the graphical interface, a programmatic access is available as well, via a direct interface for developers or for submitting genome-wide data sets. The TMFoldWeb web server is unique and currently the only web server that is able to predict the fold class of transmembrane proteins while assigning reliability scores for the prediction. This method is prepared for genome-wide analysis with its easy-to-use interface, informative result page and programmatic access. Considering the info-communication evolution in the last few years, the developed web server, as well as the molecule viewer, is responsive and fully compatible with the prevalent tablets and mobile devices.

  13. Server Interface Descriptions for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning; Møller, Anders; Su, Zhendong

    2013-01-01

    Automated testing of JavaScript web applications is complicated by the communication with servers. Specifically, it is difficult to test the JavaScript code in isolation from the server code and database contents. We present a practical solution to this problem. First, we demonstrate that formal server interface descriptions are useful in automated testing of JavaScript web applications for separating the concerns of the client and the server. Second, to support the construction of server interface descriptions for existing applications, we introduce an effective inference technique that learns communication patterns from sample data. By incorporating interface descriptions into the testing tool Artemis, our experimental results show that we increase the level of automation for high-coverage testing on a collection of JavaScript web applications that exchange JSON data between the clients and servers...

  14. New nuclear data service at CNEA: retrieval of the update libraries from a local Web-Server

    International Nuclear Information System (INIS)

    Suarez, Patricia M.; Pepe, Maria E.; Sbaffoni, Maria M.

    2000-01-01

    A new On-line Nuclear Data Service was implemented at the National Atomic Energy Commission (CNEA) Web site. The information usually issued by the Nuclear Data Section of IAEA (NDS-IAEA) on CD-ROM, as well as complementary libraries periodically downloaded from a mirror server of the NDS-IAEA Service located at IPEN, Brazil, are available on the new CNEA Web page. On the site, users can find numerical data on neutron, charged-particle, and photonuclear reactions, nuclear structure, and decay data, with related bibliographic information. This data server is permanently maintained and updated by CNEA staff members. This crew also offers assistance on the use and retrieval of nuclear data to local users. (author)

  15. FireProt: web server for automated design of thermostable proteins

    Science.gov (United States)

    Musil, Milos; Stourac, Jan; Brezovsky, Jan; Prokop, Zbynek; Zendulka, Jaroslav; Martinek, Tomas

    2017-01-01

    Abstract There is a continuous interest in increasing protein stability to enhance usability in numerous biomedical and biotechnological applications. A number of in silico tools for the prediction of the effect of mutations on protein stability have been developed recently. However, only single-point mutations with a small effect on protein stability are typically predicted with the existing tools and have to be followed by laborious protein expression, purification, and characterization. Here, we present FireProt, a web server for the automated design of multiple-point thermostable mutant proteins that combines structural and evolutionary information in its calculation core. FireProt utilizes sixteen tools and three protein engineering strategies for making reliable protein designs. The server is complemented with an interactive, easy-to-use interface that allows users to directly analyze and optionally modify designed thermostable mutants. FireProt is freely available at http://loschmidt.chemi.muni.cz/fireprot. PMID:28449074

  16. Ordinal Log-Linear Models for Contingency Tables

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

    Full Text Available A log-linear analysis is a method providing a comprehensive scheme to describe the association between categorical variables in a contingency table. The log-linear model specifies how the expected cell counts depend on the levels of the categorical variables and provides detailed information on the associations. The aim of this paper is to present theoretical, as well as empirical, aspects of ordinal log-linear models used for contingency tables with ordinal variables. We introduce log-linear models for ordinal variables: the linear-by-linear association model, the row effect model, the column effect model and Goodman's RC model. Algorithms, advantages and disadvantages will be discussed in the paper. An empirical analysis will be conducted with the use of R.
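    The linear-by-linear association model listed above augments the independence model with a single association parameter attached to ordered row and column scores, log(mu_ij) = lambda + lambda_i^X + lambda_j^Y + beta*u_i*v_j. The paper's empirical analysis is carried out in R; the sketch below only illustrates the same model form as a Poisson log-linear GLM in Python/statsmodels, fitted to a made-up 3x3 table.

        # Minimal sketch of a linear-by-linear association model for an ordinal
        # contingency table: log(mu_ij) = lambda + lambda_i^X + lambda_j^Y + beta*u_i*v_j.
        # Counts and integer scores u_i, v_j below are made up for illustration only.
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        counts = [[20, 10, 5],   # hypothetical 3x3 table of frequencies
                  [12, 15, 9],
                  [4, 11, 18]]
        rows = []
        for i, row in enumerate(counts, start=1):
            for j, n in enumerate(row, start=1):
                rows.append({"count": n, "x": i, "y": j, "u": i, "v": j})
        table = pd.DataFrame(rows)

        # C(x) and C(y) give the marginal (nominal) terms; u:v is the single
        # linear-by-linear association parameter based on the ordinal scores.
        model = smf.glm("count ~ C(x) + C(y) + u:v",
                        data=table,
                        family=sm.families.Poisson()).fit()
        print(model.summary())
        print("Estimated association parameter beta:", model.params["u:v"])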

  17. TRAP: A Three-Way Handshake Server for TCP Connection Establishment

    Directory of Open Access Journals (Sweden)

    Fu-Hau Hsu

    2016-11-01

    Full Text Available Distributed denial of service attacks have become more and more frequent nowadays. In 2013, a massive distributed denial of service (DDoS) attack was launched against Spamhaus, causing the service to shut down. In this paper, we present a three-way handshaking server for Transmission Control Protocol (TCP) connection redirection utilizing TCP header options. When a legitimate client attempts to connect to a server undergoing a SYN-flood DDoS attack, it will try to initiate a three-way handshake. After it has successfully established a connection, the server will reply with a reset (RST) packet in which a new server address and a secret are embedded. The client can thus connect to the new server, which only accepts SYN packets that carry the correct secret.

  18. Expert T-SQL window functions in SQL Server

    CERN Document Server

    Kellenberger, Kathi

    2015-01-01

    Expert T-SQL Window Functions in SQL Server takes you from any level of knowledge of windowing functions and turns you into an expert who can use these powerful functions to solve many T-SQL queries. Replace slow cursors and self-joins with queries that are easy to write and fantastically better performing, all through the magic of window functions. First introduced in SQL Server 2005, window functions came into full blossom with SQL Server 2012. They truly are one of the most notable developments in SQL in a decade, and every developer and DBA can benefit from their expressive power in sol
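    As a small illustration of the kind of query the book advocates, the sketch below uses ROW_NUMBER() (available since SQL Server 2005) to pick the latest row per group, a task that would otherwise need a cursor or self-join; the table, columns, and connection string are hypothetical and are not taken from the book.

        # Illustrative only: a ROW_NUMBER() window query that replaces a cursor or
        # self-join for "latest order per customer". Table, columns and connection
        # string are hypothetical placeholders.
        import pyodbc

        QUERY = """
        WITH ranked AS (
            SELECT CustomerID,
                   OrderDate,
                   TotalDue,
                   ROW_NUMBER() OVER (PARTITION BY CustomerID
                                      ORDER BY OrderDate DESC) AS rn
            FROM dbo.Orders
        )
        SELECT CustomerID, OrderDate, TotalDue
        FROM ranked
        WHERE rn = 1;          -- most recent order for each customer
        """

        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=localhost;DATABASE=SalesDb;Trusted_Connection=yes;")
        for customer_id, order_date, total_due in conn.cursor().execute(QUERY):
            print(customer_id, order_date, total_due)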

  19. A Fuzzy Control Course on the TED Server

    DEFF Research Database (Denmark)

    Dotoli, Mariagrazia; Jantzen, Jan

    1999-01-01

    The Training and Education Committee (TED) is a committee under ERUDIT, a Network of Excellence for fuzzy technology and uncertainty in Europe. The main objective of TED is to improve the training and educational possibilities for the nodes of ERUDIT. Since early 1999, TED has set up the TED server, an educational server that serves as a learning central for students and professionals working with fuzzy logic. Through the server, TED offers an online course on fuzzy control. The course concerns automatic control of an inverted pendulum, with a focus on rule based control by means of fuzzy logic. A ball...

  20. Aplikasi Billing Client/Server Dengan Mengunakan Microsoft Visual Basic 6.0

    OpenAIRE

    Sinukaban, Eva Solida

    2010-01-01

    This study aims to build a free billing server on a local network with UTP cable or WiFi as the transmission medium. The LAN built here is a client/server network whose server runs the Windows XP Service Pack 2 operating system. The purpose of building this Billing Server application is to enable data sharing and communication between computers so that those computers can be utilized as optimally as possible, both from the Se...

  1. HDOCK: a web server for protein-protein and protein-DNA/RNA docking based on a hybrid strategy.

    Science.gov (United States)

    Yan, Yumeng; Zhang, Di; Zhou, Pei; Li, Botong; Huang, Sheng-You

    2017-07-03

    Protein-protein and protein-DNA/RNA interactions play a fundamental role in a variety of biological processes. Determining the complex structures of these interactions is valuable, in which molecular docking has played an important role. To automatically make use of the binding information from the PDB in docking, here we have presented HDOCK, a novel web server of our hybrid docking algorithm of template-based modeling and free docking, in which cases with misleading templates can be rescued by the free docking protocol. The server supports protein-protein and protein-DNA/RNA docking and accepts both sequence and structure inputs for proteins. The docking process is fast and consumes about 10-20 min for a docking run. Tested on the cases with weakly homologous complexes of server. The HDOCK web server is available at http://hdock.phys.hust.edu.cn/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. The Role of the Web Server in a Capstone Web Application Course

    Science.gov (United States)

    Umapathy, Karthikeyan; Wallace, F. Layne

    2010-01-01

    Web applications have become commonplace in the Information Systems curriculum. Much of the discussion about Web development for capstone courses has centered on the scripting tools. Very little has been discussed about different ways to incorporate the Web server into Web application development courses. In this paper, three different ways of…

  3. Improvements to the National Transport Code Collaboration Data Server

    Science.gov (United States)

    Alexander, David A.

    2001-10-01

    The data server of the National Transport Code Collaboration Project provides a universal network interface to interpolated or raw transport data accessible by a universal set of names. Data can be acquired from a local copy of the International Multi-Tokamak (ITER) profile database as well as from TRANSP trees of MDS Plus data systems on the net. Data is provided to the user's network client via a CORBA interface, thus providing stateful data server instances, which have the advantage of remembering the desired interpolation, data set, etc. This paper will review the status and discuss the recent improvements made to the data server, such as the modularization of the data server and the addition of hdf5 and MDS Plus data file writing capability.

  4. Openlobby: an open game server for lobby and matchmaking

    Science.gov (United States)

    Zamzami, E. M.; Tarigan, J. T.; Jaya, I.; Hardi, S. M.

    2018-03-01

    Online multiplayer is one of the most essential features in modern games. However, while a multiplayer feature can be developed with simple network programming, creating a balanced multiplayer session requires additional player-management components such as a game lobby and a matchmaking system. Our objective is to develop OpenLobby, a server that is available to be used by other developers to support their multiplayer applications. The proposed system acts as a lobby and matchmaker where queueing players are matched to other players according to criteria defined by the developer. The solution provides an application programming interface that developers can use to interact with the server. For testing purposes, we developed a game that uses the server as its multiplayer server.
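    A toy sketch of the lobby/matchmaking idea follows: queued players are paired whenever a developer-defined criterion is satisfied, here a maximum rating gap. The class, method, and criterion are invented for illustration and are not OpenLobby's actual API.

        # Toy matchmaking queue in the spirit described above: players enter a queue
        # and are paired whenever a developer-defined criterion (here: rating
        # difference within a window) is met. Names and the criterion are illustrative.
        from dataclasses import dataclass

        @dataclass
        class Player:
            name: str
            rating: int

        class MatchmakingQueue:
            def __init__(self, max_rating_gap: int = 100):
                self.max_rating_gap = max_rating_gap
                self.waiting: list[Player] = []

            def enqueue(self, player: Player) -> tuple[Player, Player] | None:
                """Add a player; return a matched pair if one can be formed."""
                for i, other in enumerate(self.waiting):
                    if abs(other.rating - player.rating) <= self.max_rating_gap:
                        del self.waiting[i]
                        return other, player
                self.waiting.append(player)
                return None

        queue = MatchmakingQueue(max_rating_gap=150)
        print(queue.enqueue(Player("alice", 1200)))   # None - alice waits in the lobby
        print(queue.enqueue(Player("bob", 1275)))     # (alice, bob) - match found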

  5. Rtools: a web server for various secondary structural analyses on single RNA sequences.

    Science.gov (United States)

    Hamada, Michiaki; Ono, Yukiteru; Kiryu, Hisanori; Sato, Kengo; Kato, Yuki; Fukunaga, Tsukasa; Mori, Ryota; Asai, Kiyoshi

    2016-07-08

    The secondary structures, as well as the nucleotide sequences, are the important features of RNA molecules to characterize their functions. According to the thermodynamic model, however, the probability of any secondary structure is very small. As a consequence, any tool to predict the secondary structures of RNAs has limited accuracy. On the other hand, there are a few tools to compensate for the imperfect predictions by calculating and visualizing the secondary structural information from RNA sequences. It is desirable to obtain the rich information from those tools through a friendly interface. We implemented a web server of the tools to predict secondary structures and to calculate various structural features based on the energy models of secondary structures. By just giving an RNA sequence to the web server, the user can get the different types of solutions of the secondary structures, the marginal probabilities such as base-pairing probabilities, loop probabilities and accessibilities of the local bases, the energy changes by arbitrary base mutations as well as the measures for validations of the predicted secondary structures. The web server is available at http://rtools.cbrc.jp, which integrates software tools, CentroidFold, CentroidHomfold, IPKnot, CapR, Raccess, Rchange and RintD. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. PseKNC: a flexible web server for generating pseudo K-tuple nucleotide composition.

    Science.gov (United States)

    Chen, Wei; Lei, Tian-Yu; Jin, Dian-Chuan; Lin, Hao; Chou, Kuo-Chen

    2014-07-01

    The pseudo oligonucleotide composition, or pseudo K-tuple nucleotide composition (PseKNC), can be used to represent a DNA or RNA sequence with a discrete model or vector yet still keep considerable sequence order information, particularly the global or long-range sequence order information, via the physicochemical properties of its constituent oligonucleotides. Therefore, the PseKNC approach may hold very high potential for enhancing the power in dealing with many problems in computational genomics and genome sequence analysis. However, dealing with different DNA or RNA problems may need different kinds of PseKNC. Here, we present a flexible and user-friendly web server for PseKNC (at http://lin.uestc.edu.cn/pseknc/default.aspx) by which users can easily generate many different modes of PseKNC according to their need by selecting various parameters and physicochemical properties. Furthermore, for the convenience of the vast majority of experimental scientists, a step-by-step guide is provided on how to use the current web server to generate their desired PseKNC without the need to follow the complicated mathematical equations, which are presented in this article just for the integrity of PseKNC formulation and its development. It is anticipated that the PseKNC web server will become a very useful tool in computational genomics and genome sequence analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
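    The general idea can be sketched in a few lines: k-tuple frequencies are concatenated with lambda sequence-order correlation factors weighted by a factor w. The code below is a simplified illustration for k = 2 with a single toy property (dinucleotide GC content) standing in for the physicochemical indices selectable on the server; it is not the server's exact formulation or parameter set.

        # Simplified sketch of pseudo K-tuple nucleotide composition (here k = 2):
        # normalized k-tuple frequencies are concatenated with lambda sequence-order
        # correlation factors, weighted by w. The single "property" used below (GC
        # content of a dinucleotide) is a toy stand-in for the physicochemical
        # indices offered by the server; the exact PseKNC modes differ in these details.
        from itertools import product

        K, LAMBDA, W = 2, 3, 0.5
        KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

        def toy_property(kmer: str) -> float:
            return sum(base in "GC" for base in kmer) / K   # toy stand-in property

        def correlation(a: str, b: str) -> float:
            return (toy_property(a) - toy_property(b)) ** 2

        def pseknc(seq: str) -> list[float]:
            kmer_list = [seq[i:i + K] for i in range(len(seq) - K + 1)]
            freqs = [kmer_list.count(k) / len(kmer_list) for k in KMERS]
            thetas = []
            for j in range(1, LAMBDA + 1):   # j-tier sequence-order correlation factors
                pairs = [(kmer_list[i], kmer_list[i + j])
                         for i in range(len(kmer_list) - j)]
                thetas.append(sum(correlation(a, b) for a, b in pairs) / len(pairs))
            denom = sum(freqs) + W * sum(thetas)
            return [f / denom for f in freqs] + [W * t / denom for t in thetas]

        vector = pseknc("ACGTACGGTCCATGCA")
        print(len(vector), vector[:4])   # 4**K + LAMBDA components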

  7. Using Pattern Recognition Techniques for Server Overload Detection

    NARCIS (Netherlands)

    Bezemer, C.P.; Cheplygina, V.; Zaidman, A.

    2011-01-01

    One of the key factors in customer satisfaction is application performance. To be able to guarantee good performance, it is necessary to take appropriate measures before a server overload occurs. While in small systems it is usually possible to predict server overload using a subjective human

  8. Server virtualization management of corporate network with hyper-v

    OpenAIRE

    Kovalenko, Taras

    2012-01-01

    This paper considers the main tasks and problems of server virtualization. The practical value of virtualization in a corporate network, as well as the advantages and disadvantages of applying server virtualization, are also considered.

  9. Empirical Analysis of Server Consolidation and Desktop Virtualization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Bao Rong Chang

    2013-01-01

    Full Text Available The transition from physical servers to a virtual server infrastructure (VSI) and from desktop devices to a virtual desktop infrastructure (VDI) raises the crucial problems of server consolidation, virtualization performance, virtual machine density, total cost of ownership (TCO), and return on investment (ROI). Besides, how to appropriately choose a hypervisor for the desired server/desktop virtualization is really challenging, because a trade-off between virtualization performance and cost is a hard decision to make in the cloud. This paper introduces five hypervisors to establish the virtual environment and then gives a careful assessment based on the C/P ratio, which is derived from a composite index, consolidation ratio, virtual machine density, TCO, and ROI. As a result, even though ESX server obtains the highest ROI and lowest TCO in server virtualization and Hyper-V R2 gains the best performance of virtual machine management, both of them cost too much. Instead, the best choice is Proxmox Virtual Environment (Proxmox VE), because it not only saves much of the initial investment to own a virtual server/desktop infrastructure, but also obtains the lowest C/P ratio.

  10. Windows Server® 2008 Inside Out

    CERN Document Server

    Stanek, William R

    2009-01-01

    Learn how to conquer Windows Server 2008-from the inside out! Designed for system administrators, this definitive resource features hundreds of timesaving solutions, expert insights, troubleshooting tips, and workarounds for administering Windows Server 2008-all in concise, fast-answer format. You will learn how to perform upgrades and migrations, automate deployments, implement security features, manage software updates and patches, administer users and accounts, manage Active Directory® directory services, and more. With INSIDE OUT, you'll discover the best and fastest ways to perform core a

  11. Two-Cloud-Servers-Assisted Secure Outsourcing Multiparty Computation

    Science.gov (United States)

    Wen, Qiaoyan; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation task to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in two-cloud-servers scenario. Our main idea is to transform the outsourced data respectively encrypted by different users' public keys to the ones that are encrypted by the same two private keys of the two assisted servers so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result so that all authorized users can recover the desired result while other unauthorized ones including the two servers cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both of the computation and the communication complexities of each user in our solution are independent of the computing function. PMID:24982949

  12. Improving data retrieval rates using remote data servers

    International Nuclear Information System (INIS)

    D'Ottavio, T.; Frak, B.; Nemesure, S.; Morris, J.

    2012-01-01

    The power and scope of modern Control Systems has led to an increased amount of data being collected and stored, including data collected at high (kHz) frequencies. One consequence is that users now routinely make data requests that can cause giga-bytes of data to be read and displayed. Given that a user's patience can be measured in seconds, this can be quite a technical challenge. This paper explores one possible solution to this problem - the creation of remote data servers whose performance is optimized to handle context-sensitive data requests. Methods for increasing data delivery performance include the use of high speed network connections between the stored data and the data servers, smart caching of frequently used data, and the culling of data delivered as determined by the context of the data request. This paper describes decisions made when constructing these servers and compares data retrieval performance by clients that use or do not use an intermediate data server. (authors)
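    The two ideas named above, caching frequently used data and culling delivered data to fit the request context, can be sketched briefly; the fetch function, cache policy, and decimation rule below are illustrative inventions, not the system's actual interfaces.

        # Minimal sketch of a remote data server that (1) caches frequently requested
        # raw data and (2) culls the points it returns to match the request context
        # (here: a maximum number of points a plot can usefully display). The fetch
        # function, cache policy and decimation rule are illustrative inventions.
        from functools import lru_cache

        @lru_cache(maxsize=128)
        def fetch_raw(channel: str, t_start: int, t_end: int) -> tuple[float, ...]:
            # Stand-in for reading logged data from the archive over a fast link.
            return tuple(float(t % 97) for t in range(t_start, t_end))

        def serve_request(channel: str, t_start: int, t_end: int,
                          max_points: int = 2000) -> list[float]:
            raw = fetch_raw(channel, t_start, t_end)
            if len(raw) <= max_points:
                return list(raw)
            stride = -(-len(raw) // max_points)       # ceiling division
            return list(raw[::stride])                # cull: every stride-th point

        points = serve_request("rf:cavity:voltage", 0, 1_000_000, max_points=1000)
        print(len(points))   # bounded by max_points regardless of the raw span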

  13. Two-cloud-servers-assisted secure outsourcing multiparty computation.

    Science.gov (United States)

    Sun, Yi; Wen, Qiaoyan; Zhang, Yudong; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation task to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in two-cloud-servers scenario. Our main idea is to transform the outsourced data respectively encrypted by different users' public keys to the ones that are encrypted by the same two private keys of the two assisted servers so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result so that all authorized users can recover the desired result while other unauthorized ones including the two servers cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both of the computation and the communication complexities of each user in our solution are independent of the computing function.

  14. Getting started with Oracle WebLogic Server 12c developer's guide

    CERN Document Server

    Nunes, Fabio Mazanatti

    2013-01-01

    Getting Started with Oracle WebLogic Server 12c is a fast-paced and feature-packed book, designed to get you working with Java EE 6, JDK 7 and Oracle WebLogic Server 12c straight away, so start developing your own applications.Getting Started with Oracle WebLogic Server 12c: Developer's Guide is written for developers who are just getting started, or who have some experience, with Java EE who want to learn how to develop for and use Oracle WebLogic Server. Getting Started with Oracle WebLogic Server 12c: Developer's Guide also provides a great overview of the updated features of the 12c releas

  15. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...
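    The two implications used as test targets can be stated compactly; the lines below are the standard definitions (with F the distribution function and S = 1 - F the survival function), not new results from the paper.

        % Standard definitions behind the two test targets (F = CDF, S = 1 - F).
        \text{log-concavity: } \log f\bigl(\lambda x + (1-\lambda) y\bigr)
            \;\ge\; \lambda \log f(x) + (1-\lambda)\log f(y), \qquad 0 \le \lambda \le 1;
        \text{increasing hazard rate: } h(x) = \frac{f(x)}{1 - F(x)} \text{ is nondecreasing in } x;
        \text{NBU: } S(x+y) \le S(x)\, S(y) \quad \text{for all } x, y \ge 0.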

  16. The visualCMAT: A web-server to select and interpret correlated mutations/co-evolving residues in protein families.

    Science.gov (United States)

    Suplatov, Dmitry; Sharapova, Yana; Timonina, Daria; Kopylov, Kirill; Švedas, Vytas

    2018-04-01

    The visualCMAT web-server was designed to assist experimental research in the fields of protein/enzyme biochemistry, protein engineering, and drug discovery by providing an intuitive and easy-to-use interface to the analysis of correlated mutations/co-evolving residues. Sequence and structural information describing homologous proteins are used to predict correlated substitutions by the Mutual information-based CMAT approach, classify them into spatially close co-evolving pairs, which either form a direct physical contact or interact with the same ligand (e.g. a substrate or a crystallographic water molecule), and long-range correlations, annotate and rank binding sites on the protein surface by the presence of statistically significant co-evolving positions. The results of the visualCMAT are organized for a convenient visual analysis and can be downloaded to a local computer as a content-rich all-in-one PyMol session file with multiple layers of annotation corresponding to bioinformatic, statistical and structural analyses of the predicted co-evolution, or further studied online using the built-in interactive analysis tools. The online interactivity is implemented in HTML5 and therefore neither plugins nor Java are required. The visualCMAT web-server is integrated with the Mustguseal web-server capable of constructing large structure-guided sequence alignments of protein families and superfamilies using all available information about their structures and sequences in public databases. The visualCMAT web-server can be used to understand the relationship between structure and function in proteins, applied to select hotspots and compensatory mutations for rational design and directed evolution experiments to produce novel enzymes with improved properties, and employed to study the mechanisms of selective ligand binding and allosteric communication between topologically independent sites in protein structures. The web-server is freely available at https

  17. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    Directory of Open Access Journals (Sweden)

    Chengqi Wang

    Full Text Available With the growing security requirements of networks, biometric-based authentication schemes applied in multi-server environments are becoming more crucial and widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme which is based on the cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that our scheme satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, some of which are not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more secure properties and lower computation cost. It is obviously more appropriate for practical applications in remote distributed networks.

  18. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme

    Science.gov (United States)

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the growing security requirements of networks, biometric-based authentication schemes applied in multi-server environments are becoming more crucial and widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme which is based on the cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that our scheme satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, some of which are not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more secure properties and lower computation cost. It is obviously more appropriate for practical applications in remote distributed networks. PMID:26866606

  19. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    Science.gov (United States)

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the growing security requirements of networks, biometric-based authentication schemes applied in multi-server environments are becoming more crucial and widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme which is based on the cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that our scheme satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, some of which are not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more secure properties and lower computation cost. It is obviously more appropriate for practical applications in remote distributed networks.

  20. Instant Microsoft SQL Server Analysis Services 2012 dimensions and cube

    CERN Document Server

    Acharya, Anurag

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. Written in a practical, friendly manner, this book will take you through the journey from installing SQL Server to developing your first cubes. "Microsoft SQL Server Analysis Service 2012 Dimensions and Cube Starter" is targeted at anyone who wants to get started with cube development in Microsoft SQL Server Analysis Services. Regardless of whether you are a SQL Server developer who knows nothing about cube development or SSAS or even OLAP, you

  1. Hardwood log grades and lumber grade yields for factory lumber logs

    Science.gov (United States)

    Leland F. Hanks; Glenn L. Gammon; Robert L. Brisbin; Everette D. Rast

    1980-01-01

    The USDA Forest Service Standard Grades for Hardwood Factory Lumber Logs are described, and lumber grade yields for 16 species and 2 species groups are presented by log grade and log diameter. The grades enable foresters, log buyers, and log sellers to select and grade those logs suitable for conversion into standard factory grade lumber. By using the appropriate lumber...

  2. Client/server approach to image capturing

    Science.gov (United States)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre- press applications and high-end CCD flatbed scanners and drum- scanners with photo multiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven

  3. Dynamic Web Pages: Performance Impact on Web Servers.

    Science.gov (United States)

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)

  4. Efficient Server-Aided 2PC for Mobile Phones

    Directory of Open Access Journals (Sweden)

    Mohassel Payman

    2016-04-01

    Full Text Available Secure Two-Party Computation (2PC) protocols allow two parties to compute a function of their private inputs without revealing any information besides the output of the computation. There exist low cost general-purpose protocols for semi-honest parties that can be efficiently executed even on smartphones. However, for the case of malicious parties, current 2PC protocols are significantly less efficient, limiting their use to more resourceful devices. In this work we present an efficient 2PC protocol that is secure against malicious parties and is light enough to be used on mobile phones. The protocol is an adaptation of the protocol of Nielsen et al. (Crypto, 2012) to the Server-Aided setting, a natural relaxation of the plain model for secure computation that allows the parties to interact with a server (e.g., a cloud) who is assumed not to collude with any of the parties. Our protocol has two stages: In an offline stage - where no party knows which function is to be computed, nor who else is participating - each party interacts with the server and downloads a file. Later, in the online stage, when two parties decide to execute a 2PC together, they can use the files they have downloaded earlier to execute the computation with cost that is lower than the currently best semi-honest 2PC protocols. We show an implementation of our protocol for Android mobile phones, discuss several optimizations and report on its evaluation for various circuits. For example, the online stage for evaluating a single AES circuit requires only 2.5 seconds and can be further reduced to 1 second (amortized time) with multiple executions.

  5. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    International Nuclear Information System (INIS)

    Valassi, A; Kalkhof, A; Bartoldus, R; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.

  6. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
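    The efficiency metric defined above is simple enough to state as a worked sketch; the benchmark scores and wattages below are made-up numbers, not measurements from the demonstration.

        # Worked sketch of the efficiency metric defined above: average compute rate
        # (operations per second from the benchmark) divided by average power draw
        # measured at the power-supply cords. The numbers are made up for illustration.
        def efficiency(ops_per_second: float, avg_watts: float) -> float:
            """Computations per joule (ops/s divided by J/s)."""
            return ops_per_second / avg_watts

        servers = {                      # hypothetical benchmark results
            "server_A": (180_000, 310.0),
            "server_B": (175_000, 285.0),
        }
        for name, (ops, watts) in servers.items():
            print(f"{name}: {efficiency(ops, watts):,.0f} ops per joule")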

  7. Personal Information Leaks with Automatic Login in Mobile Social Network Services

    Directory of Open Access Journals (Sweden)

    Jongwon Choi

    2015-06-01

    Full Text Available To log in to a mobile social network service (SNS) server, users must enter their ID and password to get through the authentication process. At that time, if the user sets up the automatic login option on the app, a sort of security token is created on the server based on the user's ID and password. This security token is called a credential. Because such credentials are convenient for users, they are utilized by most mobile SNS apps. However, the current state of credential management for the majority of Android SNS apps is very weak. This paper demonstrates the possibility of a credential cloning attack. Such attacks occur when an attacker extracts the credential from the victim's smart device and inserts it into their own smart device. Then, without knowing the victim's ID and password, the attacker can access the victim's account. This type of attack gives access to various pieces of personal information without authorization. Thus, in this paper, we analyze the vulnerabilities of the main Android-based SNS apps to credential cloning attacks, and examine the potential leakage of personal information that may result. We then introduce effective countermeasures to resolve these problems.

  8. Maximum mutual information vector quantization of log-likelihood ratios for memory efficient HARQ implementations

    DEFF Research Database (Denmark)

    Danieli, Matteo; Forchhammer, Søren; Andersen, Jakob Dahl

    2010-01-01

    Modern mobile telecommunication systems, such as 3GPP LTE, make use of Hybrid Automatic Repeat reQuest (HARQ) for efficient and reliable communication between base stations and mobile terminals. To this purpose, marginal posterior probabilities of the received bits are stored in the form of log-likelihood ratios (LLRs)... The analysis leads to using maximum mutual information (MMI) as the optimality criterion and, in turn, Kullback-Leibler (KL) divergence as the distortion measure. Simulations based on an LTE-like system have proven that VQ can be implemented in a computationally simple way at low rates of 2-3 bits per LLR value...
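    The criterion and distortion measure named above can be written out explicitly; the notation below is chosen here (B the transmitted bit, L its LLR, Q the quantizer) and simply restates the standard definitions.

        % Standard forms of the quantities named above (notation chosen here, not the paper's).
        Q^{*} \;=\; \arg\max_{Q} \; I\bigl(B;\, Q(L)\bigr),
        \qquad
        I\bigl(B; Q(L)\bigr) \;=\; \sum_{b,\,q} p(b,q)\,\log\frac{p(b,q)}{p(b)\,p(q)},
        \qquad
        D_{\mathrm{KL}}(p\,\|\,q) \;=\; \sum_{x} p(x)\,\log\frac{p(x)}{q(x)}.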

  9. Mac OS X Snow Leopard Server For Dummies

    CERN Document Server

    Rizzo, John

    2009-01-01

    Making Everything Easier! Mac OS® X Snow Leopard Server for Dummies. Learn to: set up and configure a Mac network with Snow Leopard Server; administer, secure, and troubleshoot the network; incorporate a Mac subnet into a Windows Active Directory® domain; take advantage of Unix® power and security. John Rizzo. Want to set up and administer a network even if you don't have an IT department? Read on! Like everything Mac, Snow Leopard Server was designed to be easy to set up and use. Still, there are so many options and features that this book will save you heaps of time and effort. It wa

  10. Logging Concessions Enable Illegal Logging Crisis in the Peruvian Amazon

    OpenAIRE

    Finer, Matt; Jenkins, Clinton N.; Sky, Melissa A. Blue; Pine, Justin

    2014-01-01

    The Peruvian Amazon is an important arena in global efforts to promote sustainable logging in the tropics. Despite recent efforts to achieve sustainability, such as provisions in the US?Peru Trade Promotion Agreement, illegal logging continues to plague the region. We present evidence that Peru's legal logging concession system is enabling the widespread illegal logging via the regulatory documents designed to ensure sustainable logging. Analyzing official government data, we found that 68.3%...

  11. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  12. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  13. From Server to Desktop: Capital and Institutional Planning for Client/Server Technology.

    Science.gov (United States)

    Mullig, Richard M.; Frey, Keith W.

    1994-01-01

    Beginning with a request for an enhanced system for decision/strategic planning support, the University of Chicago's biological sciences division has developed a range of administrative client/server tools, instituted a capital replacement plan for desktop technology, and created a planning and staffing approach enabling rapid introduction of new…

  14. iPhone with Microsoft Exchange Server 2010 Business Integration and Deployment

    CERN Document Server

    Goodman, Steve

    2012-01-01

    iPhone with Microsoft Exchange Server 2010 - Business Integration and Deployment is a practical, step-by-step tutorial on planning, installing and configuring Exchange Server to deploy iPhones into your business. This book is aimed at system administrators who don't necessarily know about Exchange Server 2010 or ActiveSync-based mobile devices. A basic level of knowledge around Windows Servers is expected, and knowledge of smartphones and email systems in general will make some topics a little easier.

  15. Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431: Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Alliance to Save Energy; ICF Incorporated; ERG Incorporated; U.S. Environmental Protection Agency; Brown, Richard E; Brown, Richard; Masanet, Eric; Nordman, Bruce; Tschudi, Bill; Shehabi, Arman; Stanley, John; Koomey, Jonathan; Sartor, Dale; Chan, Peter; Loper, Joe; Capana, Steve; Hedman, Bruce; Duff, Rebecca; Haines, Evan; Sass, Danielle; Fanara, Andrew

    2007-08-02

    This report is the appendices to a companion report, prepared in response to the request from Congress stated in Public Law 109-431 (H.R. 5646), "An Act to Study and Promote the Use of Energy Efficient Computer Servers in the United States." This report assesses current trends in energy use and energy costs of data centers and servers in the U.S. (especially Federal government facilities) and outlines existing and emerging opportunities for improved energy efficiency. It also makes recommendations for pursuing these energy-efficiency opportunities broadly across the country through the use of information and incentive-based programs.

  16. Private information retrieval

    CERN Document Server

    Yi, Xun; Bertino, Elisa

    2013-01-01

    This book deals with Private Information Retrieval (PIR), a technique allowing a user to retrieve an element from a server in possession of a database without revealing to the server which element is retrieved. PIR has been widely applied to protect the privacy of the user in querying a service provider on the Internet. For example, by PIR, one can query a location-based service provider about the nearest car park without revealing his location to the server. The first PIR approach was introduced by Chor, Goldreich, Kushilevitz and Sudan in 1995 in a multi-server setting, where the user retriev

  17. Microsoft SQL Server OLAP Solution - A Survey

    OpenAIRE

    Badiozamany, Sobhan

    2010-01-01

    Microsoft SQL Server 2008 offers technologies for performing On-Line Analytical Processing (OLAP), directly on data stored in data warehouses, instead of moving the data into some offline OLAP tool. This brings certain benefits, such as elimination of data copying and better integration with the DBMS compared with off-line OLAP tools. This report reviews SQL Server support for OLAP, solution architectures, tools and components involved. Standard storage options are discussed but the focus of ...

  18. Two-Cloud-Servers-Assisted Secure Outsourcing Multiparty Computation

    Directory of Open Access Journals (Sweden)

    Yi Sun

    2014-01-01

    Full Text Available We focus on how to securely outsource computation task to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in two-cloud-servers scenario. Our main idea is to transform the outsourced data respectively encrypted by different users’ public keys to the ones that are encrypted by the same two private keys of the two assisted servers so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result so that all authorized users can recover the desired result while other unauthorized ones including the two servers cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both of the computation and the communication complexities of each user in our solution are independent of the computing function.

  19. CERN Document Server (CDS): Introduction

    CERN Multimedia

    CERN. Geneva; Costa, Flavio

    2017-01-01

    A short online tutorial introducing the CERN Document Server (CDS): basic functionality description, the notion of Revisions, and the CDS test environment. Links: CDS Production environment, CDS Test environment.

  20. A cement channel-detection technique using the pulsed-neutron log

    International Nuclear Information System (INIS)

    Myers, G.D.

    1991-01-01

    A channel-detection technique has been developed using boron solutions and pulsed-neutron logging (PNL) tools. This technique relies on the extremely high neutron-absorption cross section that boron exhibits relative to other common elements, including chlorine. The PNL tool is used to detect movement of a boron solution in a log-inject-log procedure. The technique has identified channels in such difficult applications as logging through two strings of pipe and in highly deviated wellbores. Logging procedures are simple and cement channels can be readily identified. The boron solutions are relatively inexpensive, safe to handle, and nonradioactive. Additional PNL information for reservoir performance evaluation is collected simultaneously during channel-detection logging. This paper describes the theory, development, field application, and limitations of this channel-detection logging technique

  1. ORION: a web server for protein fold recognition and structure prediction using evolutionary hybrid profiles.

    Science.gov (United States)

    Ghouzam, Yassine; Postic, Guillaume; Guerin, Pierre-Edouard; de Brevern, Alexandre G; Gelly, Jean-Christophe

    2016-06-20

    Protein structure prediction based on comparative modeling is the most efficient way to produce structural models when it can be performed. ORION is a dedicated webserver based on a new strategy that performs this task. The identification by ORION of suitable templates is performed using an original profile-profile approach that combines sequence and structure evolution information. Structure evolution information is encoded into profiles using structural features, such as solvent accessibility and local conformation (with Protein Blocks), which give an accurate description of the local protein structure. ORION has recently been improved, increasing by 5% the quality of its results. The ORION web server accepts a single protein sequence as input and searches homologous protein structures within minutes. Various databases such as PDB, SCOP and HOMSTRAD can be mined to find an appropriate structural template. For the modeling step, a protein 3D structure can be directly obtained from the selected template by MODELLER and displayed with global and local quality model estimation measures. The sequence and the predicted structure of 4 examples from the CAMEO server and a recent CASP11 target from the 'Hard' category (T0818-D1) are shown as pertinent examples. Our web server is accessible at http://www.dsimb.inserm.fr/ORION/.

  2. Instant Hyper-v Server Virtualization starter

    CERN Document Server

    Eguibar, Vicente Rodriguez

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. The approach is that of a tutorial, guiding users in an orderly manner toward virtualization. This book is conceived for system administrators and advanced PC enthusiasts who want to venture into the virtualization world. Although this book starts from scratch, knowledge of server operating systems, LANs and networking has to be in place. Having a good background in server administration is desirable, including networking service

  3. Decision support using anesthesia information management system records and accreditation council for graduate medical education case logs for resident operating room assignments.

    Science.gov (United States)

    Wanderer, Jonathan P; Charnin, Jonathan; Driscoll, William D; Bailin, Michael T; Baker, Keith

    2013-08-01

    Our goal in this study was to develop decision support systems for resident operating room (OR) assignments using anesthesia information management system (AIMS) records and Accreditation Council for Graduate Medical Education (ACGME) case logs and evaluate the implementations. We developed 2 Web-based systems: an ACGME case-log visualization tool, and Residents Helping in Navigating OR Scheduling (Rhinos), an interactive system that solicits OR assignment requests from residents and creates resident profiles. Resident profiles are snapshots of the cases and procedures each resident has done and were derived from AIMS records and ACGME case logs. A Rhinos pilot was performed for 6 weeks on 2 clinical services. One hundred sixty-five requests were entered and used in OR assignment decisions by a single attending anesthesiologist. Each request consisted of a rank ordered list of up to 3 ORs. Residents had access to detailed information about these cases including surgeon and patient name, age, procedure type, and admission status. Success rates at matching resident requests were determined by comparing requests with AIMS records. Of the 165 requests, 87 first-choice matches (52.7%), 27 second-choice matches (16.4%), and 8 third-choice matches (4.8%) were made. Forty-three requests were unmatched (26.1%). Thirty-nine first-choice requests overlapped (23.6%). Full implementation followed on 8 clinical services for 8 weeks. Seven hundred fifty-four requests were reviewed by 15 attending anesthesiologists, with 339 first-choice matches (45.0%), 122 second-choice matches (16.2%), 55 third-choice matches (7.3%), and 238 unmatched (31.5%). There were 279 overlapping first-choice requests (37.0%). The overall combined match success rate was 69.4%. Separately, we developed an ACGME case-log visualization tool that allows individual resident experiences to be compared against case minimums as well as resident peer groups. We conclude that it is feasible to use ACGME case-log

  4. Look-ahead policies for admission to a single server loss system

    NARCIS (Netherlands)

    Nawijn, W.M.

    1990-01-01

    Consider a single server loss system in which the server, being idle, may reject or accept an arriving customer for service depending on the state at the arrival epoch. It is assumed that at every arrival epoch the server knows the service time of the arriving customer, the arrival time of the next

  5. Evaluation of a server-client architecture for accelerator modeling and simulation

    International Nuclear Information System (INIS)

    Bowling, B.A.; Akers, W.; Shoaee, H.; Watson, W.; Zeijts, J. van; Witherspoon, S.

    1997-01-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several factors, including CPU throughput and availability, complexity of the required modeling problem, and presentation of computation results. With the advent of faster computer hardware and the advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing a query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility towards accelerator modeling demands

  6. A satellite-driven, client-server hydro-economic model prototype for agricultural water management

    Science.gov (United States)

    Maneta, Marco; Kimball, John; He, Mingzhu; Payton Gardner, W.

    2017-04-01

    Anticipating agricultural water demand, land reallocation, and impact on farm revenues associated with different policy or climate constraints is a challenge for water managers and for policy makers. While current integrated decision support systems based on programming methods provide estimates of farmer reaction to external constraints, they have important shortcomings such as the high cost of data collection surveys necessary to calibrate the model, biases associated with inadequate farm sampling, infrequent model updates and recalibration, model overfitting, or their deterministic nature, among other problems. In addition, the administration of water supplies and the generation of policies that promote sustainable agricultural regions depend on more than one bureau or office. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. To overcome these limitations, we present a client-server, integrated hydro-economic modeling and observation framework driven by satellite remote sensing and other ancillary information from regional monitoring networks. The core of the framework is a stochastic data assimilation system that sequentially ingests remote sensing observations and corrects the parameters of the hydro-economic model at unprecedented spatial and temporal resolutions. An economic model of agricultural production, based on mathematical programming, requires information on crop type and extent, crop yield, crop transpiration and irrigation technology. A regional hydro-climatologic model provides biophysical constraints to an economic model of agricultural production with a level of detail that permits the study of the spatial impact of large- and small-scale water use decisions. Crop type and extent is obtained from the Cropland Data Layer (CDL), which is multi-sensor operational classification of crops maintained by the United States Department of Agriculture. Because

  7. DNA barcode goes two-dimensions: DNA QR code web server.

    Science.gov (United States)

    Liu, Chang; Shi, Linchun; Xu, Xiaolan; Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin

    2012-01-01

    The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interest, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.
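    As a rough illustration of the encoding direction described above, the short Python sketch below turns a DNA barcode fragment into a QR code image with the open-source qrcode package. The sequence shown is an arbitrary placeholder, and this is a generic local example rather than the web server's own implementation.

    ```python
    # Minimal sketch: encode a DNA barcode fragment as a QR code image.
    # The sequence below is a made-up placeholder, not a real ITS2 barcode.
    import qrcode

    fragment = ">ExampleSpecies|ITS2\nTCGAAACCTGCCCAGCAGAACGACCCGCGAACACGTTAAAAC"

    img = qrcode.make(fragment)     # the library picks a QR version large enough for the payload
    img.save("dna_barcode_qr.png")  # scanning the image recovers the exact sequence text
    ```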

  8. DNA barcode goes two-dimensions: DNA QR code web server.

    Directory of Open Access Journals (Sweden)

    Chang Liu

    Full Text Available The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interest, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.

  9. HDF-EOS Web Server

    Science.gov (United States)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: Extract metadata in Object Definition Language (ODL) from an HDF-EOS file, Convert the metadata from ODL to Extensible Markup Language (XML), Reformat the XML metadata into human-readable Hypertext Markup Language (HTML), Publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeN-DAP) server computer, and Reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.
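    The chain of steps above can be pictured as a small driver script. The sketch below is only a conceptual outline in Python: the command names dump_odl, odl2xml and xml2html are placeholders standing in for the actual Data Usability Group tools, and the publishing path is invented for illustration.

    ```python
    # Conceptual outline of the publishing chain; all command names and paths
    # are placeholders, not the actual tools referenced in the abstract.
    import shutil
    import subprocess

    hdf_file = "GRANULE.hdf"  # hypothetical input HDF-EOS granule

    subprocess.run(["dump_odl", hdf_file, "-o", "meta.odl"], check=True)     # extract ODL metadata
    subprocess.run(["odl2xml", "meta.odl", "-o", "meta.xml"], check=True)    # ODL -> XML
    subprocess.run(["xml2html", "meta.xml", "-o", "meta.html"], check=True)  # XML -> readable HTML

    # publish the HTML metadata and the original file to the web/OPeNDAP area
    for name in (hdf_file, "meta.html"):
        shutil.copy(name, "/var/www/hdfeos/")
    ```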

  10. Sending servers to Morocco

    CERN Multimedia

    Joannah Caborn Wengler

    2012-01-01

    Did you know that computer centres are like people? They breathe air in and out like a person, they have to be kept at the right temperature, and they can even be organ donors. As part of a regular cycle of equipment renewal, the CERN Computer Centre has just donated 161 retired servers to universities in Morocco.   Prof. Abdeslam Hoummada and CERN DG Rolf Heuer seeing off the servers on the beginning of their journey to Morocco. “Many people don’t realise, but the Computer Centre is like a living thing. You don’t just install equipment and it runs forever. We’re continually replacing machines, broken parts and improving things like the cooling.” Wayne Salter, Leader of the IT Computing Facilities Group, watches over the Computer Centre a bit like a nurse monitoring a patient’s temperature, especially since new international recommendations for computer centre environmental conditions were released. “A new international s...

  11. Supervisory control system implemented in programmable logical controller web server

    OpenAIRE

    Milavec, Simon

    2012-01-01

    In this thesis, we study the feasibility of supervisory control and data acquisition (SCADA) system realisation in a web server of a programmable logic controller. With the introduction of Ethernet protocol to the area of process control, the more powerful programmable logic controllers obtained integrated web servers. The web server of a programmable logic controller, produced by Siemens, will also be described in this thesis. Firstly, the software and the hardware equipment used for real...

  12. KFC Server: interactive forecasting of protein interaction hot spots.

    Science.gov (United States)

    Darnell, Steven J; LeGault, Laura; Mitchell, Julie C

    2008-07-01

    The KFC Server is a web-based implementation of the KFC (Knowledge-based FADE and Contacts) model, a machine learning approach for the prediction of binding hot spots, or the subset of residues that account for most of a protein interface's binding free energy. The server facilitates the automated analysis of a user submitted protein-protein or protein-DNA interface and the visualization of its hot spot predictions. For each residue in the interface, the KFC Server characterizes its local structural environment, compares that environment to the environments of experimentally determined hot spots and predicts if the interface residue is a hot spot. After the computational analysis, the user can visualize the results using an interactive job viewer able to quickly highlight predicted hot spots and surrounding structural features within the protein structure. The KFC Server is accessible at http://kfc.mitchell-lab.org.

  13. Communication and logging hub for rapid prototyping of environmental sensors: presenting the Smartphone.

    Science.gov (United States)

    Hut, R.

    2017-12-01

    When designing prototype sensors for environmental variables, a critical step is a comparison campaign in which the new sensor is compared to current state-of-the-art sensors. In this step, one of the headaches for researchers can be connecting their sensor to a logging or communication device. I present a simple solution: use a smartphone that scans for Bluetooth Low Energy transmissions and uploads any measurement to a data server. In this way the prototype sensor only has to transmit its measurement values over BLE, which can be done using off-the-shelf components. The sensors don't have to be physically connected to the phone, allowing for very rapid deployment of sensors in locations that have a communication hub (i.e. a phone) installed. The communication and logging hub consists of nothing more than a low cost Android smartphone running a dedicated app. The phone is encased in a waterproof box with a large powerbank and a solar panel. I will demonstrate this live at the Fall Meeting. By installing these phones along permanent WMO certified station locations, comparison campaigns can use the "golden standard" from the WMO without many problems.
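    A minimal sketch of the relay idea follows, assuming the bleak package for Bluetooth Low Energy scanning and requests for the upload; the endpoint URL is hypothetical, and decoding the sensor-specific advertisement payload is left out.

    ```python
    # Sketch of a BLE-to-HTTP relay; the upload URL is a placeholder and the
    # sensor payload decoding (which is device-specific) is omitted.
    import asyncio

    import requests
    from bleak import BleakScanner

    UPLOAD_URL = "https://example.org/api/measurements"  # hypothetical data-server endpoint

    async def relay_once(scan_seconds: float = 5.0) -> None:
        devices = await BleakScanner.discover(timeout=scan_seconds)
        for dev in devices:
            # A real hub would parse the measurement out of the advertisement here.
            record = {"address": dev.address, "name": dev.name}
            requests.post(UPLOAD_URL, json=record, timeout=10)

    if __name__ == "__main__":
        asyncio.run(relay_once())
    ```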

  14. The design and implementation about the project of optimizing proxy servers

    International Nuclear Information System (INIS)

    Wu Ling; Liu Baoxu

    2006-01-01

    Proxy servers are an important facility in the network of an organization; they play an important role in security and access control and in accelerating Internet access. This article introduces the role of proxy servers and describes the solutions used to optimize the proxy servers at IHEP: integration, dynamic domain name resolution, and data synchronization. (authors)

  15. Rclick: a web server for comparison of RNA 3D structures.

    Science.gov (United States)

    Nguyen, Minh N; Verma, Chandra

    2015-03-15

    RNA molecules play important roles in key biological processes in the cell and are becoming attractive for developing therapeutic applications. Since the function of RNA depends on its structure and dynamics, comparing and classifying the RNA 3D structures is of crucial importance to molecular biology. In this study, we have developed Rclick, a web server that is capable of superimposing RNA 3D structures by using clique matching and 3D least-squares fitting. Our server Rclick has been benchmarked and compared with other popular servers and methods for RNA structural alignments. In most cases, Rclick alignments were better in terms of structure overlap. Our server also recognizes conformational changes between structures. For this purpose, the server produces complementary alignments to maximize the extent of detectable similarity. Various examples showcase the utility of our web server for comparison of RNA, RNA-protein complexes and RNA-ligand structures.

  16. Adventures in the evolution of a high-bandwidth network for central servers

    International Nuclear Information System (INIS)

    Swartz, K.L.; Cottrell, L.; Dart, M.

    1994-08-01

    In a small network, clients and servers may all be connected to a single Ethernet without significant performance concerns. As the number of clients on a network grows, the necessity of splitting the network into multiple sub-networks, each with a manageable number of clients, becomes clear. Less obvious is what to do with the servers. Group file servers on subnets and multihomed servers offer only partial solutions -- many other types of servers do not lend themselves to a decentralized model, and tend to collect on another, well-connected but overloaded Ethernet. The higher speed of FDDI seems to offer an easy solution, but in practice both expense and interoperability problems render FDDI a poor choice. Ethernet switches appear to permit cheaper and more reliable networking to the servers while providing an aggregate network bandwidth greater than a simple Ethernet. This paper studies the evolution of the server networks at SLAC. Difficulties encountered in the deployment of FDDI are described, as are the tools and techniques used to characterize the traffic patterns on the server network. Performance of Ethernet, FDDI, and switched Ethernet networks is analyzed, as are reliability and maintainability issues for these alternatives. The motivations for re-designing the SLAC general server network to use a switched Ethernet instead of FDDI are described, as are the reasons for choosing FDDI for the farm and firewall networks at SLAC. Guidelines are developed which may help in making this choice for other networks

  17. Cased-hole log analysis and reservoir performance monitoring

    CERN Document Server

    Bateman, Richard M

    2015-01-01

    This book addresses vital issues, such as the evaluation of shale gas reservoirs and their production. Topics include the cased-hole logging environment; reservoir fluid properties; flow regimes; temperature, noise, cement bond, and pulsed neutron logging; and casing inspection. Production logging charts and tables are included in the appendices. The work serves as a comprehensive reference for production engineers with upstream E&P companies, well logging service company employees, university students, and petroleum industry training professionals. This book also provides methods of conveying production logging tools along horizontal well segments as well as measurements of formation electrical resistivity through casing; covers new information on fluid flow characteristics in inclined pipe and provides new and improved nuclear tool measurements in cased wells; and includes updates on cased-hole wireline formation testing.

  18. Toward an Automated Labeling of Event Log Attributes

    DEFF Research Database (Denmark)

    Abbad Andaloussi, Amine; Burattin, Andrea; Weber, Barbara

    2018-01-01

    Process mining aims at exploring the data produced by executable business processes to mine the underlying control-flow and dataflow. Most process mining algorithms assume the existence of an event log with a certain maturity level. Unfortunately, the logs provided by process-unaware information systems often do not comply with the required maturity level, since they lack the notion of process instance, also referred to in process mining as “case id”. Without a proper identification of the case id attribute in log files, the outcome of process mining algorithms is unpredictable. This paper proposes a new approach that aims to overcome this challenge by automatically inferring the case id attribute from log files. The approach has been implemented as a ProM plugin and evaluated with several real-world event logs. The results demonstrate a high accuracy in inferring the case id attribute.
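    To make the problem concrete, the sketch below shows one naive heuristic for ranking columns of a flat event log as case id candidates; it is a simplified illustration only, not the algorithm implemented in the ProM plugin, and the log file name is hypothetical.

    ```python
    # Naive illustration: rank columns of a CSV event log as case-id candidates,
    # preferring columns whose values repeat over several events without being
    # constant or unique per row. Not the paper's algorithm.
    import csv
    from collections import Counter

    def case_id_scores(path: str) -> dict:
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        n = len(rows)
        scores = {}
        for col in rows[0]:
            counts = Counter(row[col] for row in rows)
            if len(counts) in (1, n):          # constant column, or one value per event
                scores[col] = 0.0
            else:
                scores[col] = n / len(counts)  # average events ("trace length") per value
        return scores

    scores = case_id_scores("events.csv")      # hypothetical log file
    print(max(scores, key=scores.get))         # best case-id candidate under this heuristic
    ```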

  19. Standby-Loss Elimination in Server Power Supply

    Directory of Open Access Journals (Sweden)

    Jong-Woo Kim

    2017-07-01

    Full Text Available In a server power system, a standby converter is required in order to provide the standby output, monitor the system’s status, and communicate with the server power system. Since these functions are always required, losses from the standby converter are produced even though the system operates in normal mode. For these reasons, the losses deteriorate the total efficiency of the system. In this paper, a new structure is proposed to eliminate the losses from the standby converter of a server power supply. The key feature of the proposed structure is that the main direct current (DC/DC converter substitutes all of the output power of the standby converter, and the standby converter is turned off in normal mode. With the proposed structure, the losses from the standby converter can be eliminated in normal mode, and this leads to a higher efficiency in overall load conditions. Although the structure has been proposed in the previous work, very important issues such as a steady state analysis, the transient responses, and how to control the standby converter are not discussed. This paper presents these issues further. The feasibility of the proposed structure has been verified with 400 V link voltage, 12 V/62.5 A main output, and a 12 V/2.1 A standby output server power system.

  20. Client-server password recovery

    NARCIS (Netherlands)

    Chmielewski, Ł.; Hoepman, J.H.; Rossum, P. van

    2009-01-01

    Human memory is not perfect - people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the

  1. A Satellite Data-Driven, Client-Server Decision Support Application for Agricultural Water Resources Management

    Science.gov (United States)

    Johnson, Lee F.; Maneta, Marco P.; Kimball, John S.

    2016-01-01

    Water cycle extremes such as droughts and floods present a challenge for water managers and for policy makers responsible for the administration of water supplies in agricultural regions. In addition to the inherent uncertainties associated with forecasting extreme weather events, water planners need to anticipate water demands and water user behavior in atypical circumstances. This requires the use of decision support systems capable of simulating agricultural water demand with the latest available data. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. In previous work we have demonstrated novel methodologies to use satellite-based observational technologies, in conjunction with hydro-economic models and state of the art data assimilation methods, to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, and land allocation. These methods create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents. The methods can be driven with information from existing satellite-derived operational products, such as the Satellite Irrigation Management Support system (SIMS) operational over California, the Cropland Data Layer (CDL), and using a modified light-use efficiency algorithm to retrieve crop yield from the synergistic use of MODIS and Landsat imagery. Here we present an integration of this modeling framework in a client-server architecture based on the Hydra platform. Assimilation and processing of resource intensive remote sensing data, as well as hydrologic and other ancillary information occur on the server side. This information is processed and summarized as attributes in water demand nodes that are part of a vector description of the water distribution network. With this architecture, our decision support system becomes a lightweight 'app' that

  2. Prepare for X-Win32 - the new X11 server software for Windows computers

    CERN Multimedia

    IT Department

    2011-01-01

    Starnet X-Win32 will replace Exceed as the X11 Server software on Windows computers by February 2012. X11 Server software allows a Windows user to have a graphical user interface on a remote Linux server. This change, initially motivated by a significant change of license conditions for Exceed, brings an easier integration of Windows and Linux logon mechanisms. At the same time, X-Win32 addresses the common use cases while providing a more intuitive configuration interface. CERN Predefined Connections will be available as before. They offer an easy way of starting applications on LXPLUS using PuTTY or starting the KDE, GNOME or ICE window managers. Since X-Win32 is better integrated with SSH and CERN Kerberos compared to Exceed, it is much simpler to set up secure access to Linux services. The decision to choose X-Win32 as the new X11 software resulted from an evaluation that involved various user communities and support teams. More information, including the documented use cases, is available at https://...

  3. Analisis Perbandingan Unjuk Kerja Sistem Penyeimbang Beban Web Server dengan HAProxy dan Pound Links

    Directory of Open Access Journals (Sweden)

    Dite Ardian

    2013-04-01

    Full Text Available The development of internet technology has led many organizations to expand their website services. Initially a single web server accessible to everyone through the Internet is used, but when the number of users accessing the web server becomes very large, the traffic load burdens the web server. Optimization of the web server is therefore necessary to cope with the overload it receives when traffic is high. The methodology of this final project research includes a literature study, system design, and testing of the system. References were taken from related books as well as from several internet sources. The design in this thesis uses HAProxy and Pound Links as web server load balancers. The final stage of the research is testing of the network system, so as to create a web server system that is reliable and safe. The result is a web server system that can be accessed by many users simultaneously and rapidly, with HAProxy and Pound Links load balancing set up in front of the web servers, so as to create a web server system with good performance and high availability.

  4. Microsoft® Exchange Server 2007 Administrator's Companion

    CERN Document Server

    Glenn, Walter; Maher, Joshua

    2009-01-01

    Get your mission-critical messaging and collaboration systems up and running with the essential guide to deploying and managing Exchange Server 2007, now updated for SP1. This comprehensive administrator's reference covers the full range of server and client deployments, unified communications, security features, performance optimization, troubleshooting, and disaster recovery. It also includes four chapters on security policy, tools, and techniques to help protect messaging systems from viruses, spam, and phishing. Written by expert authors Walter Glenn and Scott Lowe, this reference deliver

  5. Log N-log S is inconclusive

    Science.gov (United States)

    Klebesadel, R. W.; Fenimore, E. E.; Laros, J.

    1983-01-01

    The log N-log S data acquired by the Pioneer Venus Orbiter Gamma Burst Detector (PVO) are presented and compared to similar data from the Soviet KONUS experiment. Although the PVO data are consistent with and suggestive of a -3/2 power law distribution, the results are not adequate at this stage of observations to differentiate between a -3/2 and a -1 power law slope.
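    For context, the -3/2 slope referred to here is the textbook expectation for a homogeneous population of sources in Euclidean space; a brief sketch of that standard argument (not taken from the paper itself) is:

    ```latex
    % Standard Euclidean counting argument for the -3/2 slope.
    % A source of luminosity L at distance r is detected with flux S = L/(4\pi r^2),
    % so sources brighter than S lie within r(S) \propto S^{-1/2}, and for a uniform
    % spatial density n the cumulative count is
    \[
      N(>S) \;\propto\; n\, r^3(S) \;\propto\; S^{-3/2},
    \]
    % i.e. a straight line of slope -3/2 in the \log N\text{--}\log S plane; a flatter
    % slope (such as -1) indicates a departure from a homogeneous Euclidean population.
    ```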

  6. Encyclopedia of well logging

    International Nuclear Information System (INIS)

    Desbrandes, R.

    1985-01-01

    The 16 chapters of this book aim to provide students, trainees and engineers with a manual covering all well-logging measurements, ranging from drilling to production and from oil to minerals by way of geothermal energy. Each chapter is a summary but a bibliography is given at the end of each chapter. Well-logging during drilling, wireline logging equipment and techniques, petroleum logging, data processing of borehole data, interpretation of well-logging, sampling tools, completion and production logging, logging in relief wells to kill off uncontrolled blowouts, techniques for high temperature geothermal energy, small-scale mining and hydrology, logging with oil-base mud and finally recommended logging programs are all topics covered. There is one chapter on nuclear well-logging which is indexed separately. (UK)

  7. Middleware for multi-client and multi-server mobile applications

    NARCIS (Netherlands)

    Rocha, B.P.S.; Rezende, C.G.; Loureiro, A.A.F.

    2007-01-01

    With popularization of mobile computing, many developers have faced problems due to great heterogeneity of devices. To address this issue, we present in this work a middleware for multi-client and multi-server mobile applications. We assume that the middleware at the server side has no resource

  8. 2MASS Catalog Server Kit Version 2.1

    Science.gov (United States)

    Yamauchi, C.

    2013-10-01

    The 2MASS Catalog Server Kit is open source software for easily constructing a high performance search server for important astronomical catalogs. This software utilizes the open source RDBMS PostgreSQL; therefore, any user can set up the database on their local computer by following the step-by-step installation guide. The kit provides highly optimized stored functions for positional searches similar to those of SDSS SkyServer. Together with these, the powerful SQL environment of PostgreSQL will meet various users' demands. We released 2MASS Catalog Server Kit version 2.1 in 2012 May, which supports the latest WISE All-Sky catalog (563,921,584 rows) and 9 major all-sky catalogs. Local databases are often indispensable for observatories with unstable or narrow-band networks or severe use cases, such as retrieving large numbers of records within a small period of time. This software is well suited for such purposes, and the additional supported catalogs and improvements of version 2.1 cover a wider range of applications, including advanced calibration systems, scientific studies using complicated SQL queries, etc. Official page: http://www.ir.isas.jaxa.jp/~cyamauch/2masskit/
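    Once such a local PostgreSQL database is installed, queries can be issued from any client; the Python sketch below shows the general shape of a cone search, but the table name, stored-function name and connection details are placeholders rather than the kit's actual schema.

    ```python
    # Hedged sketch of a cone search against a locally installed catalog database.
    # Table name, function name and connection parameters are placeholders; see
    # the kit's documentation for the schema it actually creates.
    import psycopg2

    conn = psycopg2.connect(dbname="twomass", user="guest")  # hypothetical local database
    ra, dec, radius_arcmin = 83.8221, -5.3911, 3.0

    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT ra, dec, j_m, h_m, k_m "
            "FROM twomass_psc "                                  # placeholder table
            "WHERE distance_arcmin_eq(ra, dec, %s, %s) < %s",    # placeholder positional function
            (ra, dec, radius_arcmin),
        )
        for row in cur.fetchall():
            print(row)
    ```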

  9. FOLDNA, a Web Server for Self-Assembled DNA Nanostructure Autoscaffolds and Autostaples

    Directory of Open Access Journals (Sweden)

    Chensheng Zhou

    2012-01-01

    Full Text Available DNA self-assembly is a nanotechnology that folds DNA into desired shapes. Self-assembled DNA nanostructures, also known as origami, are increasingly valuable in nanomaterial and biosensing applications. Two ways to use DNA nanostructures in medicine are to form nanoarrays, and to work as vehicles in drug delivery. The DNA nanostructures perform well as a biomaterial in these areas because they have spatially addressable and size controllable properties. However, manually designing complementary DNA sequences for self-assembly is a technically demanding and time consuming task, which makes it advantageous for computers to do this job instead. We have developed a web server, FOLDNA, which can automatically design 2D self-assembled DNA nanostructures according to custom pictures and scaffold sequences provided by the users. It is the first web server to provide an entirely automatic design of self-assembled DNA nanostructure, and it takes merely a second to generate comprehensive information for molecular experiments including: scaffold DNA pathways, staple DNA directions, and staple DNA sequences. This program could save as much as several hours in the designing step for each DNA nanostructure. We randomly selected some shapes and corresponding outputs from our server and validated its performance in molecular experiments.

  10. The eDoc-Server Project Building an Institutional Repository for the Max Planck Society

    CERN Document Server

    Beier, Gerhard

    2004-01-01

    With the eDoc-Server the Heinz Nixdorf Center for Information Management in the Max Planck Society (ZIM) provides the research institutes of the Max Planck Society (MPS) with a platform to disseminate, store, and manage their scientific output. Moreover, eDoc serves as a tool to facilitate and promote open access to scientific information and primary sources. Since its introduction in October 2002 eDoc has gained high visibility within the MPS. It has been backed by strong institutional commitment to open access as documented in the 'Berlin Declaration on Open Access to the Data of the Sciences and Humanities', which was initiated by the MPS and found large support among major research organizations in Europe. This paper will outline the concept as well as the current status of the eDoc-Server, providing an example for the development and introduction of an institutional repository in a multi-disciplinary research organization.

  11. Client-Server Password Recovery

    NARCIS (Netherlands)

    Chmielewski, L.; Hoepman, J.H.; Rossum, P. van

    2009-01-01

    Human memory is not perfect – people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the

  12. The Difference Between Using Proxy Server and VPN

    Directory of Open Access Journals (Sweden)

    David Dwiputra Kurniadi

    2015-11-01

    For example, looking for software or games through the internet. But sometimes there are websites that cannot be opened because they show an Internet Positive blocking notification. To solve that problem, hackers found a solution by creating proxy servers and VPNs. Nowadays the internet is very advanced and very easy to access, and there are a lot of proxy servers and VPNs that can be easily used.

  13. DoS attacks targeting SIP server and improvements of robustness

    OpenAIRE

    Vozňák, Miroslav; Šafařík, Jakub

    2012-01-01

    The paper describes the vulnerability of SIP servers to DoS attacks and methods for server protection. For each attack, the paper describes its impact on a SIP server, an evaluation of the threat and the way in which it is executed. Attacks are described in detail, and a security precaution is proposed to prevent each of them. The proposed protection solution is based on a specific topology of intrusion protection system components consisting of a combination of...

  14. Optimal Configuration of Fault-Tolerance Parameters for Distributed Server Access

    DEFF Research Database (Denmark)

    Daidone, Alessandro; Renier, Thibault; Bondavalli, Andrea

    2013-01-01

    Server replication is a common fault-tolerance strategy to improve transaction dependability for services in communications networks. In distributed architectures, fault-diagnosis and recovery are implemented via the interaction of the server replicas with the clients and other entities...... model using stochastic activity networks (SAN) for the evaluation of performance and dependability metrics of a generic transaction-based service implemented on a distributed replication architecture. The composite SAN model can be easily adapted to a wide range of client-server applications deployed...

  15. An Array Library for Microsoft SQL Server with Astrophysical Applications

    Science.gov (United States)

    Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.

    2012-09-01

    Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques that are beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out of the box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. Also, the library is designed to be able to be seamlessly integrated with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on-the-fly, from SQL code, inside the database server process. We are currently testing the prototype with two different scientific data sets: The Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory

  16. NCI's Distributed Geospatial Data Server

    Science.gov (United States)

    Larraondo, P. R.; Evans, B. J. K.; Antony, J.

    2016-12-01

    Earth systems, environmental and geophysics datasets are an extremely valuable source of information about the state and evolution of the Earth. However, different disciplines and applications require this data to be post-processed in different ways before it can be used. For researchers experimenting with algorithms across large datasets or combining multiple data sets, the traditional approach to batch data processing and storing all the output for later analysis rapidly becomes unfeasible, and often requires additional work to publish for others to use. Recent developments on distributed computing using interactive access to significant cloud infrastructure opens the door for new ways of processing data on demand, hence alleviating the need for storage space for each individual copy of each product. The Australian National Computational Infrastructure (NCI) has developed a highly distributed geospatial data server which supports interactive processing of large geospatial data products, including satellite Earth Observation data and global model data, using flexible user-defined functions. This system dynamically and efficiently distributes the required computations among cloud nodes and thus provides a scalable analysis capability. In many cases this completely alleviates the need to preprocess and store the data as products. This system presents a standards-compliant interface, allowing ready accessibility for users of the data. Typical data wrangling problems such as handling different file formats and data types, or harmonising the coordinate projections or temporal and spatial resolutions, can now be handled automatically by this service. The geospatial data server exposes functionality for specifying how the data should be aggregated and transformed. The resulting products can be served using several standards such as the Open Geospatial Consortium's (OGC) Web Map Service (WMS) or Web Feature Service (WFS), Open Street Map tiles, or raw binary arrays under
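    As an example of the standards-compliant access mentioned above, a client can request a rendered map through WMS; the sketch below uses the OWSLib package with a hypothetical service URL and layer name, not NCI's actual endpoints.

    ```python
    # Client-side WMS request using OWSLib; the service URL and layer name are
    # placeholders standing in for a real geospatial data server.
    from owslib.wms import WebMapService

    wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")
    print(list(wms.contents))              # layers advertised by the server

    img = wms.getmap(
        layers=["example_layer"],          # placeholder layer
        srs="EPSG:4326",
        bbox=(110.0, -45.0, 155.0, -10.0), # lon/lat bounding box (roughly Australia)
        size=(512, 512),
        format="image/png",
    )
    with open("map.png", "wb") as f:
        f.write(img.read())
    ```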

  17. Windows® Small Business Server 2008 Administrator's Pocket Consultant

    CERN Document Server

    Zacker, Craig

    2009-01-01

    Portable and precise, this pocket-sized guide delivers ready answers for administering Windows Small Business Server 2008. Zero in on core support tasks and tools using quick-reference tables, instructions, and lists. You'll get the focused information you need to solve problems and get the job done, whether at your desk or in the field. Get fast facts to: plan, install, and configure a small business network; navigate the Windows SBS Console tool; create and administer user and group accounts; manage automatic updates, disk storage, and shared printers; configure mail settings and customize inte

  18. Note on a tandem queue with delayed server release

    NARCIS (Netherlands)

    Nawijn, W.M.

    2000-01-01

    We consider a tandem queue with two stations. The first station is an $s$-server queue with Poisson arrivals and exponential service times. After terminating his service in the first station, a customer enters the second station to require service at a single server, while in the meantime he is

  19. A tandem queue with server slow-down and blocking

    NARCIS (Netherlands)

    van Foreest, N.D.; van Ommeren, Jan C.W.; Mandjes, M.R.H.; Scheinhardt, Willem R.W.

    2005-01-01

    We consider two variants of a two-station tandem network with blocking. In both variants the first server ceases to work when the queue length at the second station hits a 'blocking threshold.' In addition, in variant 2 the first server decreases its service rate when the second queue exceeds a

  20. A tandem queue with server slow-down and blocking.

    NARCIS (Netherlands)

    van Foreest, N.; van Ommeren, J.C.; Mandjes, M.R.H.; Scheinhardt, W.

    2005-01-01

    We consider two variants of a two-station tandem network with blocking. In both variants the first server ceases to work when the queue length at the second station hits a 'blocking threshold.' In addition, in variant 2 the first server decreases its service rate when the second queue exceeds a

  1. UC Irvine CHRS Real-time Global Satellite Precipitation Monitoring System (G-WADI PERSIANN-CCS GeoServer) for Hydrometeorological Applications

    Science.gov (United States)

    Sorooshian, S.; Hsu, K. L.; Gao, X.; Imam, B.; Nguyen, P.; Braithwaite, D.; Logan, W. S.; Mishra, A.

    2015-12-01

    The G-WADI Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) GeoServer has been successfully developed by the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California Irvine in collaboration with the UNESCO's International Hydrological Programme (IHP) and a number of its international centers. The system employs state-of-the-art technologies in remote sensing and artificial intelligence to estimate precipitation globally from satellite imagery in real-time and high spatiotemporal resolution (4km, hourly). It offers graphical tools and data service to help the user in emergency planning and management for natural disasters related to hydrological processes. The G-WADI PERSIANN-CCS GeoServer has been upgraded with new user-friendly functionalities. The precipitation data generated by the GeoServer is disseminated to the user community through support provided by ICIWaRM (The International Center for Integrated Water Resources Management), UNESCO and UC Irvine. Recently a number of new applications for mobile devices have been developed by our students. The RainMapper has been available on App Store and Google Play for the real-time PERSIANN-CCS observations. A global crowd sourced rainfall reporting system named iRain has also been developed to engage the public globally to provide qualitative information about real-time precipitation in their location which will be useful in improving the quality of the PERSIANN-CCS data. A number of recent examples of the application and use of the G-WADI PERSIANN-CCS GeoServer information will also be presented.

  2. Secure Server Login by Using Third Party and Chaotic System

    Science.gov (United States)

    Abdulatif, Firas A.; zuhiar, Maan

    2018-05-01

    Servers are popular among all companies and used by most of them, but the security threats against servers make these companies concerned about using them. In this paper we therefore design a secure system based on a one-time password and third-party authentication (a smart phone). The proposed system secures the server login process by using a one-time password to authenticate persons who have permission to log in, and a third-party device (smart phone) as an additional level of security.
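    The one-time-password part of such a scheme is commonly realized with a time-based OTP shared between the server and the phone. The sketch below is a generic RFC 6238 TOTP implementation using only the Python standard library; it illustrates the idea and is not the chaotic-system construction proposed in the paper.

    ```python
    # Generic TOTP (RFC 6238) sketch with the standard library only; not the
    # paper's chaotic-system scheme.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period            # current time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    # The server and the smart phone hold the same secret; at login the server
    # compares the submitted code with totp(shared_secret) for the current window.
    print(totp("JBSWY3DPEHPK3PXP"))   # example base32 secret
    ```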

  3. MultiSETTER: web server for multiple RNA structure comparison.

    Science.gov (United States)

    Čech, Petr; Hoksza, David; Svozil, Daniel

    2015-08-12

    Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent a much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. In this paper, we present the updated version of the SETTER web server that implements a user friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as the list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for a multiple RNA structure alignment. The MultiSETTER server offers the visual inspection of an alignment in 3D space which may reveal structural and functional relationships not captured by other multiple alignment methods based either on a sequence or on secondary structure motifs.

  4. Advancing the Power and Utility of Server-Side Aggregation

    Science.gov (United States)

    Fulker, Dave; Gallagher, James

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, data providers and, notably, owing to the open-source nature of all OPeNDAP software, to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate the topics listed above and embrace additional ones.
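    For readers new to DAP access, the snippet below shows the usual client-side pattern with the pydap package, following its quickstart against OPeNDAP's public test dataset; any Hyrax-served dataset URL could be substituted, and only the requested slab is transferred, illustrating the server-side subsetting that DAP provides.

    ```python
    # Client access to a DAP-served dataset with pydap; the URL is OPeNDAP's
    # public test dataset and can be replaced by any Hyrax-served endpoint.
    from pydap.client import open_url

    dataset = open_url("http://test.opendap.org/dap/data/nc/coads_climatology.nc")
    print(list(dataset.keys()))        # variables exposed by the server

    sst = dataset["SST"]               # lazy handle; no data transferred yet
    subset = sst[0, 10:14, 10:14]      # only this slab is requested from the server
    print(subset.data)
    ```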

  5. Use of activity logs to improve online collaboration

    Directory of Open Access Journals (Sweden)

    César Coll Salvador

    2018-02-01

    Full Text Available This article presents a review of works that focus on eLearning platforms and the mining of participants' activity data. The studies in this research area generate information, through the analysis of such logs and data, that is provided to the students in real time to help them collaborate and learn through collaboration on the platform. There are studies from different areas such as Learning Analytics, Educational Data Mining, Group Awareness Tools and Interaction Analysis Tools. The review takes a double perspective: (i) to analyze the data extracted from activity logs, their processing, the information generated and the ways of communicating it; and (ii) to explore the models and instruments used to assess how the information provided impacts online collaborative processes and/or learning. The conclusions emphasize that the models of collaborative learning that would justify the selection of the data extracted from the activity logs, their processing, the information generated and provided to the students, and the way of communicating it are not explicitly stated. In addition, important biases are detected because the multidimensional nature of collaborative learning processes is not considered. Also, few studies analyze the relations between students' uses of the information provided and the quality of their collaborative processes and learning results. The very few studies that do analyze such relations do not go into depth on the changes in group dynamics caused by the information.

  6. Analysis of RIA standard curve by log-logistic and cubic log-logit models

    International Nuclear Information System (INIS)

    Yamada, Hideo; Kuroda, Akira; Yatabe, Tami; Inaba, Taeko; Chiba, Kazuo

    1981-01-01

    In order to improve goodness-of-fit in RIA standard curve analysis, programs for computing log-logistic and cubic log-logit fits were written in BASIC using a personal computer P-6060 (Olivetti). An iterative least-squares method based on Taylor series expansion was applied for non-linear estimation of the logistic and log-logistic models. Here ''log-logistic'' denotes Y = (a - d)/(1 + (log(X)/c)^b) + d. As weights, either 1, 1/var(Y) or 1/σ^2 were used in the logistic or log-logistic fits, and either Y^2(1 - Y)^2, Y^2(1 - Y)^2/var(Y), or Y^2(1 - Y)^2/σ^2 were used in the quadratic or cubic log-logit fits. The term var(Y) represents the squares of the pure error, and σ^2 represents the estimated variance calculated using the following equation: log(σ^2 + 1) = log(A) + J log(y). As indicators of goodness-of-fit, MSL/S_e^2, CMD% and WRV (see text) were used. Better regression was obtained in the case of alpha-fetoprotein by log-logistic than by logistic fitting. The cortisol standard curve was fitted much better with cubic log-logit than with quadratic log-logit. The predicted precision of the AFP standard curve was below 5% with the log-logistic analysis instead of 8% with the logistic analysis. The predicted precision obtained using cubic log-logit was about five times lower than that with quadratic log-logit. The importance of selecting good models in RIA data processing is stressed in conjunction with the intrinsic precision of the radioimmunoassay system indicated by the predicted precision. (author)
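
    Written out as display formulas, the model and weighting options quoted in this abstract are (a direct transcription of the text above, using Y for the response and X for the dose):

    ```latex
    % Four-parameter log-logistic standard curve (as defined in the abstract)
    Y = \frac{a - d}{1 + \left(\log X / c\right)^{b}} + d

    % Weighting schemes and the variance model quoted above
    w \in \left\{ 1,\; \frac{1}{\mathrm{var}(Y)},\; \frac{1}{\sigma^{2}} \right\},
    \qquad
    \log\!\left(\sigma^{2} + 1\right) = \log A + J \log Y
    ```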

  7. Application of computer mathematical modeling in nuclear well-logging industry

    International Nuclear Information System (INIS)

    Cai Shaohui

    1994-01-01

    Nuclear well logging techniques have made rapid progress since the first well log calibration facility (the API pits) was dedicated in 1959. The first computer mathematical models followed in the late 1970s. Mathematical modeling can now minimize design and experiment time, as well as provide new information and ideas on tool design, environmental effects and result interpretation. The author gives a brief review of the achievements of mathematical modeling on nuclear logging problems.

  8. Presentation and information management server PRIMAS

    International Nuclear Information System (INIS)

    Gibbert, R.

    1998-01-01

    An advanced computerized information system, PRIMAS, is presented. Its tasks include meeting legal requirements of the Federal government and the EU, environmental data model solutions, multi-sectorial analysis, etc. Its open system architecture makes it possible to integrate and connect it to existing standard systems. Its main use is the provision and processing of environmental information. (R.P.)

  9. Design and Development of the STT Telematika Telkom Library Membership System Using RFID Based on Java 2 Standard Edition with a Client-Server Concept

    Directory of Open Access Journals (Sweden)

    Yana Yuniarsyah

    2013-05-01

    Full Text Available RFID is a relatively new technology that has not yet been widely applied, and it can overcome some of the disadvantages of barcode technology. One application of RFID technology is the library membership card. The STT Telematika library currently uses its membership card for borrowing and returning transactions only. With RFID embedded in the membership card, the card becomes multifunctional: in addition to borrowing and returning books, it can be used for visitor attendance. Visitor attendance and library reports are distributed using a client-server concept, which makes data management easier for librarians. The programming language used in the design of the library information system is Java 2 Standard Edition (J2SE), with NetBeans 7.0 as the IDE and MySQL as the database. The software was designed using the waterfall (linear sequential) model. The information system was modelled with the Unified Modeling Language (UML), including use case, activity and class diagrams, and the database was designed with an Entity Relationship Diagram (ERD). Testing covered user requirements, black-box testing of the program, and user testing. The RFID components of the system are the RFID reader, which reads the information carried by the tag, and the RFID tag, which transmits information to the reader. The success of the client-server concept is demonstrated by the successful recording of visitor attendance, the display of reports from the client, and the storage of visitor attendance data on the server.

  10. A Study on Partnering Mechanism in B to B EC Server for Global Supply Chain Management

    Science.gov (United States)

    Kaihara, Toshiya

    B to B Electronic Commerce (EC) technology is advancing and is regarded as an information infrastructure for global business. As the number and diversity of EC participants grows in this agile environment, the complexity of purchasing from a vast and dynamic array of goods and services needs to be hidden from the end user. Putting the complexity into the EC system instead means providing a flexible auction server that enables commerce between different business units. A market mechanism can solve the product distribution problem in the auction server by allocating the scheduled resources according to market prices. In this paper, we propose a partnering mechanism for B to B EC based on market-oriented programming that mediates among various, previously unspecified companies in a trade, and we demonstrate the applicability of economic analysis to this framework after constructing a primitive EC server. The proposed mechanism facilitates sophisticated B to B EC that yields a Pareto optimal solution for all the participating business units in the coming agile era.
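
    Market-oriented programming of the kind described here is typically realized as an iterative price-adjustment (tatonnement) loop. The Python sketch below is a generic, hedged illustration of that idea with made-up demand and supply functions; it is not the authors' actual auction server.

    ```python
    # Hedged sketch of market-oriented resource allocation via tatonnement.
    # The demand/supply functions and parameters are illustrative assumptions.

    def excess_demand(price: float) -> float:
        demand = 100.0 / price        # buyers want less as price rises
        supply = 2.0 * price          # sellers offer more as price rises
        return demand - supply

    def clear_market(price: float = 1.0, step: float = 0.05, tol: float = 1e-6) -> float:
        """Adjust the price in proportion to excess demand until the market clears."""
        for _ in range(10_000):
            z = excess_demand(price)
            if abs(z) < tol:
                break
            price += step * z         # raise price if demand exceeds supply
        return price

    if __name__ == "__main__":
        p = clear_market()
        print(f"approximate clearing price: {p:.3f}")
    ```

    At the clearing price, no participant can be made better off without making another worse off under these toy functions, which is the Pareto-optimality intuition the abstract appeals to.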

  11. LHCb: Fabric Management with Diskless Servers and Quattor on LHCb

    CERN Multimedia

    Schweitzer, P; Brarda, L; Neufeld, N

    2011-01-01

    Large scientific experiments nowadays very often use large computer farms to process the events acquired from the detectors. In LHCb a small sysadmin team manages the 1400 servers of the LHCb Event Filter Farm, as well as a wide variety of control servers for the detector electronics and infrastructure computers: file servers, gateways, DNS, DHCP and others. This variety of servers could not be handled without a solid fabric management system. We chose the Quattor toolkit for this task. We will present our use of this toolkit, with an emphasis on how we handle our diskless nodes (Event Filter Farm nodes and computers embedded in the acquisition electronics cards). We will show our current tests to replace the standard (RedHat/Scientific Linux) way of handling diskless nodes with fusion filesystems, and how this improves fabric management.

  12. Microsoft SQL Server 2012 Business Intelligence and the new features it brings

    OpenAIRE

    Luoma, Lauri

    2013-01-01

    This thesis deals with migrating the exercises of the MCTS Self-Paced Training Kit (Exam 70-448) laboratory manual, used on the Microsoft Business Intelligence (BI) solutions course at Metropolia University of Applied Sciences, to SQL Server 2012. The purpose of the work is to show that the SQL Server 2012 BI tools are suitable for the exercises. The work switches to a newer tool, SQL Server Data Tools, which replaces SQL Server 2008 R2 Business Intelligence Development Studio. At the beginning of the work ...

  13. Energy Servers Deliver Clean, Affordable Power

    Science.gov (United States)

    2010-01-01

    K.R. Sridhar developed a fuel cell device for Ames Research Center that could use solar power to split water into oxygen for breathing and hydrogen for fuel on Mars. Sridhar saw the potential of the technology, when reversed, to create clean energy on Earth. He founded Bloom Energy, of Sunnyvale, California, to advance the technology. Today, the Bloom Energy Server is providing cost-effective, environmentally friendly energy to a host of companies such as eBay, Google, and The Coca-Cola Company. Bloom's NASA-derived Energy Servers generate energy that is about 67-percent cleaner than a typical coal-fired power plant when using fossil fuels and 100-percent cleaner with renewable fuels.

  14. RNAiFold: a web server for RNA inverse folding and molecular design.

    Science.gov (United States)

    Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan

    2013-07-01

    Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints, such as requiring the GC-content to lie within a certain range or requiring the number of strong (GC), weak (AU) and wobble (GU) base pairs to lie in a certain range, the RNAiFold web server determines one or more RNA sequences whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic and hence is suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem, i.e. given a representation of the desired hybridization structure, RNAiFold returns two sequences whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold. Source code for the underlying algorithms, implemented in COMET and supported on Linux, can be downloaded at the server website.

  15. Server consolidation for heterogeneous computer clusters using Colored Petri Nets and CPN Tools

    Directory of Open Access Journals (Sweden)

    Issam Al-Azzoni

    2015-10-01

    Full Text Available In this paper, we present a new approach to server consolidation in heterogeneous computer clusters using Colored Petri Nets (CPNs). Server consolidation aims to reduce energy costs and improve resource utilization by reducing the number of servers necessary to run the existing virtual machines in the cluster. It exploits the emerging technology of live migration, which allows migrating virtual machines between servers without stopping their provided services. Server consolidation approaches attempt to find migration plans that minimize the necessary size of the cluster. Our approach finds plans which not only minimize the overall number of used servers, but also minimize the total data migration overhead. The latter objective is not taken into consideration by other approaches and heuristics. We explore the use of CPN Tools in analyzing the state spaces of the CPNs. Since the state space of the CPN model can grow exponentially with the size of the cluster, we examine different techniques to generate and analyze the state space in order to find good plans for server consolidation within acceptable time and computing power.
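
    For intuition about the two-part objective (fewest active servers, least data moved), the hedged Python sketch below implements a simple first-fit-decreasing placement and tallies the migration overhead; it is an illustrative baseline with made-up VM sizes, not the CPN-based method of the paper.

    ```python
    # Hedged sketch: greedy server-consolidation baseline (not the paper's CPN model).
    # VMs are (name, memory_gb, current_server); all values are illustrative.
    CAPACITY_GB = 64

    vms = [
        ("vm1", 30, "s1"), ("vm2", 20, "s1"),
        ("vm3", 25, "s2"), ("vm4", 10, "s3"), ("vm5", 8, "s3"),
    ]
    servers = ["s1", "s2", "s3"]

    def consolidate(vms, servers, capacity):
        free = {s: capacity for s in servers}
        used = set()
        placement, migrated_gb = {}, 0
        # First-fit decreasing: place large VMs first, preferring servers that are
        # already in use (to keep the server count low), then the VM's current
        # server (to keep migration low).  Assumes total capacity suffices.
        for name, size, origin in sorted(vms, key=lambda v: v[1], reverse=True):
            candidates = sorted(
                servers, key=lambda s: (s not in used, s != origin, free[s])
            )
            target = next(s for s in candidates if free[s] >= size)
            free[target] -= size
            used.add(target)
            placement[name] = target
            if target != origin:          # a live migration would move this VM's data
                migrated_gb += size
        return placement, migrated_gb

    plan, moved = consolidate(vms, servers, CAPACITY_GB)
    print(f"{len(set(plan.values()))} servers used, {moved} GB migrated, plan={plan}")
    ```

    On this toy input the greedy plan reaches the minimum server count but moves more data than necessary, which is exactly the gap that an exhaustive state-space search, such as the CPN-based approach in the paper, aims to close.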

  16. ORCAN-a web-based meta-server for real-time detection and functional annotation of orthologs.

    Science.gov (United States)

    Zielezinski, Andrzej; Dziubek, Michal; Sliski, Jan; Karlowski, Wojciech M

    2017-04-15

    ORCAN (ORtholog sCANner) is a web-based meta-server for one-click evolutionary and functional annotation of protein sequences. The server combines information from the most popular orthology-prediction resources, including four tools and four online databases. Functional annotation utilizes five additional comparisons between the query and identified homologs, including: sequence similarity, protein domain architectures, functional motifs, Gene Ontology term assignments and a list of associated articles. Furthermore, the server uses a plurality-based rating system to evaluate the orthology relationships and to rank the reference proteins by their evolutionary and functional relevance to the query. Using a dataset of ∼1 million true yeast orthologs as a sample reference set, we show that combining multiple orthology-prediction tools in ORCAN increases the sensitivity and precision by 1-2 percent points. The service is available for free at http://www.combio.pl/orcan/ . wmk@amu.edu.pl. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
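
    The plurality-based rating mentioned above can be pictured as simple agreement counting across prediction sources. The hedged Python sketch below illustrates that idea with made-up tool outputs; it is not ORCAN's actual scoring scheme.

    ```python
    # Hedged sketch: rank candidate orthologs by how many sources agree on them.
    from collections import Counter

    # Illustrative outputs of several orthology-prediction sources for one query.
    predictions = {
        "toolA": {"YGR192C", "YJR009C"},
        "toolB": {"YGR192C"},
        "dbC":   {"YGR192C", "YJL052W"},
        "dbD":   {"YJR009C", "YGR192C"},
    }

    votes = Counter(hit for hits in predictions.values() for hit in hits)
    for candidate, support in votes.most_common():   # most widely supported first
        print(f"{candidate}: supported by {support}/{len(predictions)} sources")
    ```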

  17. Instant migration from Windows Server 2008 and 2008 R2 to 2012 how-to

    CERN Document Server

    Sivarajan, Santhosh

    2013-01-01

    Presented in a hands-on reference manual style, with real-world scenarios to lead you through each process. This book is intended for Windows server administrators who are performing migrations from their existing Windows Server 2008 / 2008 R2 environment to Windows Server 2012. The reader must be familiar with Windows Server 2008.

  18. On the optimal use of a slow server in two-stage queueing systems

    Science.gov (United States)

    Papachristos, Ioannis; Pandelis, Dimitrios G.

    2017-07-01

    We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs, assuming that two servers cannot collaborate to work on the same job and that preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the downstream dedicated server should not idle, and the same is true for the upstream one when holding costs are larger there. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.
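
    As background for the holding-cost criterion, one common way to state such an objective (an assumed generic formulation, since the abstract does not spell out whether an average-cost or discounted criterion is used) is the long-run average holding cost over assignment policies:

    ```latex
    \min_{\pi}\;
    \limsup_{T \to \infty} \frac{1}{T}\,
    \mathbb{E}^{\pi}\!\left[\int_{0}^{T} \bigl(h_1 Q_1(t) + h_2 Q_2(t)\bigr)\, \mathrm{d}t\right]
    ```

    where Q_i(t) is the number of jobs at stage i, h_i the corresponding holding-cost rate, and the policies respect the no-collaboration and no-preemption constraints described above.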

  19. New portable neutron generator for well logging

    International Nuclear Information System (INIS)

    Chicanov, A.E.; Gromov, E. V.; Gulko, V. M.; Izmailov, A. V.

    1994-01-01

    Information about the design, investigation and testing of a new neutron generator for pulsed neutron well logging (PNL) is given in this paper. The main physical characteristics of the new PNL apparatus are: neutron flux 2x10^8 n/s; pulse frequency >= 400 Hz; diameter 90 mm; logging velocity > 200 m/h; number of probes: 2; resource > 300 h. The generator is equipped with a gas-filled neutron accelerator tube named NTF-2. The prospects for application and optimization of the PNL apparatus are considered. (author)

  20. Distill: a suite of web servers for the prediction of one-, two- and three-dimensional structural features of proteins

    Directory of Open Access Journals (Sweden)

    Walsh Ian

    2006-09-01

    Full Text Available Abstract Background We describe Distill, a suite of servers for the prediction of protein structural features: secondary structure; relative solvent accessibility; contact density; backbone structural motifs; residue contact maps at 6, 8 and 12 Angstrom; coarse protein topology. The servers are based on large-scale ensembles of recursive neural networks and trained on large, up-to-date, non-redundant subsets of the Protein Data Bank. Together with structural feature predictions, Distill includes a server for the prediction of Cα traces for short proteins (up to 200 amino acids). Results The servers are state-of-the-art, with secondary structure predicted correctly for nearly 80% of residues (currently the top performance on EVA), 2-class solvent accessibility nearly 80% correct, and contact maps exceeding 50% precision on the top non-diagonal contacts. A preliminary implementation of the predictor of protein Cα traces featured among the top 20 Novel Fold predictors at the last CASP6 experiment as group Distill (ID 0348). The majority of the servers, including the Cα trace predictor, now take into account homology information from the PDB, when available, resulting in greatly improved reliability. Conclusion All predictions are freely available through a simple joint web interface and the results are returned by email. In a single submission the user can send protein sequences for a total of up to 32k residues to all or a selection of the servers. Distill is accessible at the address: http://distill.ucd.ie/distill/.

  1. Web application for monitoring mainframe computer, Linux operating systems and application servers

    OpenAIRE

    Dimnik, Tomaž

    2016-01-01

    This work presents the idea and the realization of a web application for monitoring the operation of a mainframe computer, servers with the Linux operating system, and application servers. The web application is intended for the administrators of these systems, as an aid to better understand the current state, load and operation of the individual components of the server systems.

  2. Comparison of approaches for mobile document image analysis using server supported smartphones

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcoming these limitations is to perform the resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is Optical Character Recognition (OCR), which is used to extract text from images captured by mobile phones. In this study, our goal is to compare the in-phone and the remote-server processing approaches for mobile document image analysis in order to explore their trade-offs. In the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. On the other hand, in the remote-server approach, the core OCR process runs on the remote server and the other processes run on the mobile phone. Results of the experiments show that the remote-server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote-server approach overall outperforms the in-phone approach in terms of the selected speed and correct-recognition metrics, if the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote-server approach performs better than the in-phone approach in terms of speed and acceptable correct-recognition metrics.
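
    The trade-off measured here can be summarized with a back-of-the-envelope model: remote processing wins only when the OCR speed-up outweighs the added transfer delay. The hedged Python sketch below expresses that comparison with illustrative numbers, not the paper's measured values.

    ```python
    # Hedged sketch: when does remote OCR beat on-phone OCR?  Numbers are illustrative.

    def remote_time(image_mb, uplink_mbps, server_ocr_s, network_rtt_s=0.1):
        upload_s = image_mb * 8 / uplink_mbps      # transfer delay for the image
        return upload_s + network_rtt_s + server_ocr_s

    def in_phone_time(phone_ocr_s):
        return phone_ocr_s

    if __name__ == "__main__":
        compressed_mb = 0.4                        # downscaled/compressed document photo
        for uplink in (1.0, 5.0, 20.0):            # e.g. slow 3G vs. Wi-Fi-like uplinks
            r = remote_time(compressed_mb, uplink, server_ocr_s=0.8)
            p = in_phone_time(phone_ocr_s=4.0)
            winner = "remote" if r < p else "in-phone"
            print(f"uplink {uplink:>4} Mbps: remote {r:.2f}s vs in-phone {p:.2f}s -> {winner}")
    ```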

  3. CERN servers go to Mexico

    CERN Multimedia

    Stefania Pandolfi

    2015-01-01

    On Wednesday, 26 August, 384 servers from the CERN Computing Centre were donated to the Faculty of Science in Physics and Mathematics (FCFM) and the Mesoamerican Centre for Theoretical Physics (MCTP) at the University of Chiapas, Mexico.   CERN’s Director-General, Rolf Heuer, met the Mexican representatives in an official ceremony in Building 133, where the servers were prepared for shipment. From left to right: Frédéric Hemmer, CERN IT Department Head; Raúl Heredia Acosta, Deputy Permanent Representative of Mexico to the United Nations and International Organizations in Geneva; Jorge Castro-Valle Kuehne, Ambassador of Mexico to the Swiss Confederation and the Principality of Liechtenstein; Rolf Heuer, CERN Director-General; Luis Roberto Flores Castillo, President of the Swiss Chapter of the Global Network of Qualified Mexicans Abroad; Virginia Romero Tellez, Coordinator of Institutional Relations of the Swiss Chapter of the Global Network of Qualified Me...

  4. The pdk-100 enhances interpretation capabilities for pulsed neutron capture logs

    International Nuclear Information System (INIS)

    Randall, R.R.; Oliver, D.W.; Ferti, W.H.

    1986-01-01

    The PDK-100 is a new pulsed neutron logging system designed to measure Sigma (Σ), the macroscopic thermal neutron capture cross section. In addition to determining Σ, the system provides logging curves which are a measure of formation porosity and which furnish information concerning borehole conditions. This paper reviews the principles of operation of the PDK-100, and presents examples which illustrate the utility of the logging system. In addition, the progress of investigations into new parameters which can be derived with pulsed neutron logging data will be reported

  5. Remote Laboratory Java Server Based on JACOB Project

    Directory of Open Access Journals (Sweden)

    Pavol Bisták

    2011-02-01

    Full Text Available Remote laboratories play an important role in the educational process of engineers. This paper deals with the structure of remote laboratories. The proposed remote laboratory structure is based on a Java server application that communicates with Matlab through COM technology for data exchange under the Windows operating system. Java does not support COM directly, so the results of the JACOB project are used and modified to cope with this problem. In laboratories for control engineering education, the control algorithm usually runs on a PC with Matlab that actually controls the real plant. This is the server side, described in the paper in detail. To demonstrate the possibilities of remote control, a Java client application is also introduced. It covers communication and offers a user-friendly interface for the control of a remote plant and visualization of measured data.

  6. Professional Microsoft SQL Server 2012 Integration Services

    CERN Document Server

    Knight, Brian; Moss, Jessica M; Davis, Mike; Rock, Chris

    2012-01-01

    An in-depth look at the radical changes to the newest release of SSIS. Microsoft SQL Server 2012 Integration Services (SSIS) builds on the revolutionary database product suite first introduced in 2005. With this crucial resource, you will explore how this newest release serves as a powerful tool for performing extraction, transformation, and load (ETL) operations. A team of SQL Server experts deciphers this complex topic and provides detailed coverage of the new features of the 2012 product release. In addition to technical updates and additions, the authors present you with a new set of SSIS b

  7. Improving consensus contact prediction via server correlation reduction.

    Science.gov (United States)

    Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming

    2009-05-06

    Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find out that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method assuming that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction use.
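
    To make the pipeline sketched in this abstract concrete (decorrelate the servers, then weight the resulting components), here is a hedged, schematic Python example; it runs on synthetic scores and uses a plain linear program, so it is an illustration of the idea rather than the authors' exact maximum-likelihood or integer-programming formulation.

    ```python
    # Hedged, schematic sketch of "extract latent servers, then weight them".
    # Synthetic data; not the paper's exact MLE/ILP formulation.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n_servers, n_contacts = 6, 200
    truth = rng.random(n_contacts) < 0.3                  # which contacts are real
    # Correlated server scores: shared signal plus server-specific noise.
    signal = truth.astype(float)
    scores = np.vstack([0.7 * signal + 0.5 * rng.random(n_contacts)
                        for _ in range(n_servers)])

    # Step 1: extract decorrelated "latent servers" via PCA (SVD on centred scores).
    centred = scores - scores.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    latent = vt[:3]                                       # top principal components

    # Step 2: weight the latent servers to separate true from false contacts
    # (an LP relaxation of the weighting idea; weights bounded in [-1, 1]).
    margin = latent[:, truth].mean(axis=1) - latent[:, ~truth].mean(axis=1)
    res = linprog(c=-margin, bounds=[(-1, 1)] * latent.shape[0], method="highs")
    weights = res.x

    consensus = weights @ latent                          # combined contact scores
    top = np.argsort(consensus)[::-1][: n_contacts // 5]  # evaluate the top L/5
    print("precision of top L/5:", truth[top].mean())
    ```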

  8. Improving consensus contact prediction via server correlation reduction

    Directory of Open Access Journals (Sweden)

    Xu Jinbo

    2009-05-01

    Full Text Available Abstract Background Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find out that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. Results In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method assuming that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Conclusion Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction use.

  9. Generalized localization for the double trigonometric Fourier series and the Walsh-Fourier series of functions in L log⁺L log⁺log⁺L

    International Nuclear Information System (INIS)

    Bloshanskaya, S K; Bloshanskii, I L; Roslova, T Y

    1998-01-01

    For an arbitrary open set Ω ⊂ I² = [0,1)² and an arbitrary function f ∈ L log⁺L log⁺log⁺L(I²) such that f = 0 on Ω, the double Fourier series of f with respect to the trigonometric system Ψ = E and the Walsh-Paley system Ψ = W is shown to converge to zero (over rectangles) almost everywhere on Ω. Thus, it is proved that generalized localization almost everywhere holds on arbitrary open subsets of the square I² for the double trigonometric Fourier series and the Walsh-Fourier series of functions in the class L log⁺L log⁺log⁺L (in the case of summation over rectangles). It is also established that such localization breaks down on arbitrary sets that are not dense in I², in the classes Φ_Ψ(L)(I²), for the orthonormal system Ψ = E and an arbitrary function such that Φ_E(u) = o(u log⁺log⁺u) as u → ∞, or for Φ_W(u) = u(log⁺log⁺u)^(1-ε), 0 < ε < 1

  10. Web server for the administrative and technical documentation of the radiodiagnostic facilities

    Energy Technology Data Exchange (ETDEWEB)

    Soto, M; Campayo, J. M; Guardia, V. [Logistica y Acondicionamientos Industriales SAU, Sorolla Center, Local 10, Av. de las Cortes Valencianas No. 58, 46015 Valencia (Spain); Mayo, P., E-mail: m.soto@lainsa.co [TITANIA Servicios Tecnologicos SL, Sorolla Center, Local 10, Av. de las Cortes Valencianas No. 58, 46015 Valencia (Spain)

    2010-10-15

    Nowadays the Radiological Protection Technical Unit of LAINSA, part of Grupo Dominguis, is assigned radiological security tasks in a large number of medical X-ray facilities. It is recognised by the Nuclear Security Council as a specialist in the assessment of protection against the radiological risks associated with medical, industrial and nuclear activities, and it is also authorised as an external personal dosimetry centre. Medical X-ray facilities in particular generate a large amount of information required by the national regulatory authority to assure their good functioning. This information is formed by administrative procedures for the regulatory authority in the industrial and public health areas, periodic quality controls of the radiographic equipment, radiological verifications at different locations to measure radioactivity levels, certificates of employee training for work with radioactivity, dosimetric registrations of occupationally exposed employees and documents of their medical aptitude for the job, etc. This paper presents a web server application to manage this information in an effective way. On this server each facility has an online space with private key access that holds all the administrative documents and nuclear security reports of the facility. Moreover, the client who is responsible for the radiological security of the centre can access all this information at any moment, minimizing delay times and optimizing the storage of the information in electronic format. The objective is that this information can be consulted, modified or checked at any time, quickly and safely. All this information has to be accessible to the interested medical facility, to the Radiological Protection Technical Unit contracted by the facility to carry out the radiological protection assessment, and to the regulatory authority in nuclear security, in order to guarantee good practice in medical and nuclear activities. (Author)

  11. MCSA Windows Server 2012 R2 installation and configuration study guide exam 70-410

    CERN Document Server

    Panek, William

    2015-01-01

    Master Windows Server installation and configuration with hands-on practice and interactive study aids for the MCSA: Windows Server 2012 R2 exam 70-410. MCSA: Windows Server 2012 R2 Installation and Configuration Study Guide: Exam 70-410 provides complete preparation for exam 70-410: Installing and Configuring Windows Server 2012 R2. With comprehensive coverage of all exam topics and plenty of hands-on practice, this self-paced guide is the ideal resource for those preparing for the MCSA on Windows Server 2012 R2. Real-world scenarios demonstrate how the lessons are applied in everyday settings. Reader

  12. Mining process performance from event logs

    NARCIS (Netherlands)

    Adriansyah, A.; Buijs, J.C.A.M.; La Rosa, M.; Soffer, P.

    2013-01-01

    In systems where process executions are not strictly enforced by a predefined process model, obtaining reliable performance information is not trivial. In this paper, we analyzed an event log of a real-life process, taken from a Dutch financial institute, using process mining techniques. In

  13. Geophysical borehole logging test procedure: Final draft

    International Nuclear Information System (INIS)

    1986-09-01

    The purpose of geophysical borehole logging from the At-Depth Facility (ADF) is to provide information which will assist in characterizing the site geologic conditions and in classifying the engineering characteristics of the rock mass in the vicinity of the ADF. The direct goals of borehole logging include identification of lithologic units and their correlation from hole to hole, identification of fractured or otherwise porous or permeable zones, quantitative or semi-quantitative estimation of various formation properties, and evaluation of factors such as the borehole diameter and orientation. 11 figs., 4 tabs

  14. 3Drefine: an interactive web server for efficient protein structure refinement.

    Science.gov (United States)

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-07-08

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Borehole logging

    International Nuclear Information System (INIS)

    Olsen, H.

    1995-01-01

    Numerous ground water investigations have been accomplished by means of borehole logging. Borehole logging can be applied to establish new water recovery wells, to control the existing water producing wells and source areas and to estimate ground water quality. (EG)

  16. EarthServer: Cross-Disciplinary Earth Science Through Data Cube Analytics

    Science.gov (United States)

    Baumann, P.; Rossi, A. P.

    2016-12-01

    The unprecedented increase of imagery, in-situ measurements, and simulation data produced by Earth (and Planetary) Science observation missions bears a rich, yet not leveraged potential for gaining insights from integrating such diverse datasets and transforming scientific questions into actual queries to data, formulated in a standardized way. The intercontinental EarthServer [1] initiative is demonstrating new directions for flexible, scalable Earth Science services based on innovative NoSQL technology. Researchers from Europe, the US and Australia have teamed up to rigorously implement the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently of whatever efficient data structuring a server network may perform internally, users (scientists, planners, decision makers) will always see just a few datacubes they can slice and dice. EarthServer has established client [2] and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman [3,4], enables direct interaction, including 3-D visualization, common EO data processing, and general analytics. Services exclusively rely on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS). Conversely, EarthServer has shaped and advanced WCS based on the experience gained. The first phase of EarthServer has advanced scalable array database technology into 150+ TB services. Currently, Petabyte datacubes are being built for ad-hoc and cross-disciplinary querying, e.g. using climate, Earth observation and ocean data. We will present the EarthServer approach, its impact on OGC / ISO / INSPIRE standardization, and its platform technology, rasdaman. References: [1] Baumann, et al. (2015) DOI: 10.1080/17538947.2014.1003106 [2] Hogan, P. (2011) NASA World Wind, Proceedings of the 2nd International Conference on Computing for Geospatial Research
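
    As a hedged illustration of what a standardized datacube query can look like, the Python snippet below builds a WCS 2.0-style GetCoverage request with spatial and temporal trimming; the host name and the coverage identifier are placeholders, not an actual EarthServer endpoint.

    ```python
    # Hedged sketch: a WCS 2.0 GetCoverage request with axis trimming (subsetting).
    # The endpoint and coverage name are hypothetical placeholders.
    import urllib.parse

    base = "https://example.org/rasdaman/ows"
    params = {
        "service": "WCS",
        "version": "2.0.1",
        "request": "GetCoverage",
        "coverageId": "AverageTemperature",          # hypothetical datacube name
        "format": "image/tiff",
    }
    # Each axis of the datacube can be trimmed independently.
    subsets = [
        'subset=Lat(40,50)',
        'subset=Long(-10,5)',
        'subset=ansi("2015-01-01","2015-03-01")',    # time axis
    ]
    url = base + "?" + urllib.parse.urlencode(params) + "&" + "&".join(subsets)
    print(url)   # the server would return only the requested slab of the datacube
    ```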

  17. Round-Trip Delay Estimation in OPC UA Server-Client Communication Channel

    OpenAIRE

    Nakutis, Zilvinas; Deksnys, Vytautas; Jarusevicius, Ignas; Dambrauskas, Vilius; Cincikas, Gediminas; Kriauceliunas, Alenas

    2017-01-01

    In this paper an estimation of the round-trip delay (RTD) in an OPC UA server-client channel was investigated in various data communication networks, including Ethernet, WiFi, and 3G. Testing was carried out using the developed IoT gateway device running an OPC UA server and a remote computer running an OPC UA client. The server and the client machines were configured to operate in a Virtual Private Network powered by OpenVPN. Experimental analysis revealed that RTD values are distributed in the wide range exh...

  18. Efficient Server-Aided Secure Two-Party Function Evaluation with Applications to Genomic Computation

    Directory of Open Access Journals (Sweden)

    Blanton Marina

    2016-10-01

    Full Text Available Computation based on genomic data is becoming increasingly popular today, be it for medical or other purposes. Non-medical uses of genomic data in a computation often take place in a server-mediated setting where the server offers the ability for joint genomic testing between the users. Undeniably, genomic data is highly sensitive, which in contrast to other biometry types, discloses a plethora of information not only about the data owner, but also about his or her relatives. Thus, there is an urgent need to protect genomic data. This is particularly true when the data is used in computation for what we call recreational non-health-related purposes. Towards this goal, in this work we put forward a framework for server-aided secure two-party computation with the security model motivated by genomic applications. One particular security setting that we treat in this work provides stronger security guarantees with respect to malicious users than the traditional malicious model. In particular, we incorporate certified inputs into secure computation based on garbled circuit evaluation to guarantee that a malicious user is unable to modify her inputs in order to learn unauthorized information about the other user’s data. Our solutions are general in the sense that they can be used to securely evaluate arbitrary functions and offer attractive performance compared to the state of the art. We apply the general constructions to three specific types of genomic tests: paternity, genetic compatibility, and ancestry testing and implement the constructions. The results show that all such private tests can be executed within a matter of seconds or less despite the large size of one’s genomic data.

  19. Introduction of a backup system for data and servers of main IT infrastructure services

    International Nuclear Information System (INIS)

    Hirayama, Takashi; Kannari, Masaaki

    2013-06-01

    The optimization of the JAEA network system has been promoted in accordance with the optimization plan, which has the fundamental principles of ensuring its dependability, information security and usability. With respect to ensuring dependability, we addressed a) the reduction of both trouble probability and recovery time, and b) the execution of the business continuity plan in the event of a large-scale earthquake. For the latter, we installed an e-mail backup server and an alternate connection to the Internet at the Kansai Photon Science Institute (Kizu area), based on lessons learned from the Great East Japan Earthquake of March 11, 2011. In addition, we introduced a backup system for the data and servers of other main IT infrastructure services. This report documents the configuration and operation of the backup system. (author)

  20. MCTBI: a web server for predicting metal ion effects in RNA structures.

    Science.gov (United States)

    Sun, Li-Zhen; Zhang, Jing-Xiang; Chen, Shi-Jie

    2017-08-01

    Metal ions play critical roles in RNA structure and function. However, web servers and software packages for predicting ion effects in RNA structures are notably scarce. Furthermore, the existing web servers and software packages mainly neglect ion correlation and fluctuation effects, which are potentially important for RNAs. We here report a new web server, the MCTBI server (http://rna.physics.missouri.edu/MCTBI), for the prediction of ion effects for RNA structures. This server is based on the recently developed MCTBI, a model that can account for ion correlation and fluctuation effects for nucleic acid structures and can provide improved predictions for the effects of metal ions, especially for multivalent ions such as Mg 2+ effects, as shown by extensive theory-experiment test results. The MCTBI web server predicts metal ion binding fractions, the most probable bound ion distribution, the electrostatic free energy of the system, and the free energy components. The results provide mechanistic insights into the role of metal ions in RNA structure formation and folding stability, which is important for understanding RNA functions and the rational design of RNA structures. © 2017 Sun et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  1. Web Usage Mining, Pattern Discovery and Log Files

    OpenAIRE

    Tri Suratno; Toni Prahasto; Adian Fatchur Rochim

    2014-01-01

    Analysis of server access data can provide significant and useful information for performance improvement, restructuring and improving the effectiveness of a web site. Data mining is one of the most effective ways to detect a series of patterns of information from large amounts of data. The application of data mining to Internet usage, called web mining, is a set of data mining techniques used for the web. Web mining technologies and data mining is a combination o...

  2. An Enhanced Biometric Based Authentication with Key-Agreement Protocol for Multi-Server Architecture Based on Elliptic Curve Cryptography.

    Science.gov (United States)

    Reddy, Alavalapati Goutham; Das, Ashok Kumar; Odelu, Vanga; Yoo, Kee-Young

    2016-01-01

    Biometric-based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric-based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation in this paper proves that Lu et al.'s protocol does not provide user anonymity or perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-the-middle attacks and clock synchronization problems. In addition, this paper proposes an enhanced biometric-based authentication with key-agreement protocol for multi-server architecture based on elliptic curve cryptography using smartcards. We proved that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and the performance analysis demonstrate that the proposed protocol is robust and efficient compared to Lu et al.'s protocol and existing similar protocols.

  3. Building server capabilities in China

    DEFF Research Database (Denmark)

    Adeyemi, Oluseyi; Slepniov, Dmitrij; Wæhrens, Brian Vejrum

    2012-01-01

    The purpose of this paper is to further our understanding of multinational companies building server capabilities in China. The paper is based on the cases of two western companies with operations in China. The findings highlight a number of common patterns in the 1) managerial challenges related...

  4. Windows Server 2012 vulnerabilities and security

    Directory of Open Access Journals (Sweden)

    Gabriel R. López

    2015-09-01

    Full Text Available This investigation analyses the history of the vulnerabilities of the base system Windows Server 2012, highlighting the most critical vulnerabilities reported every four months from its release to the current date of the research, organized by vulnerability type according to the NIST classification. Next, given the official vulnerabilities of the system, the authors show how a critical vulnerability is treated by Microsoft in order to counter the security flaw. The authors then present the recommended security approaches for Windows Server 2012, which focus on the baseline software provided by Microsoft; update, patch and change management; hardening practices; and the application of Active Directory Rights Management Services (AD RMS). AD RMS is considered an important feature since it is able to protect the system even when it is compromised, by using access lists at the document level. Finally, the investigation of the state of the art related to the security of Windows Server 2012 presents an analysis of solutions offered by third-party vendors to secure the base system that is the subject of this study. The solution recommended by the authors is that of the security vendor Symantec, with its successful features as well as characteristics that the authors consider may have to be improved in future versions of the security solution.

  5. DIAS Project: The establishment of a European digital upper atmosphere server

    Science.gov (United States)

    Belehaki, A.; Cander, Lj.; Zolesi, B.; Bremer, J.; Juren, C.; Stanislawska, I.; Dialetis, D.; Hatzopoulos, M.

    2005-08-01

    The main objective of DIAS (European Digital Upper Atmosphere Server) project is to develop a pan-European digital data collection on the state of the upper atmosphere, based on real-time information and historical data collections provided by most operating ionospheric stations in Europe. A DIAS system will distribute information required by various groups of users for the specification of upper atmospheric conditions over Europe suitable for nowcasting and forecasting purposes. The successful operation of the DIAS system will lead to the development of new European added-value products and services, to the effective use of observational data in operational applications and consequently to the expansion of the relevant European market.

  6. DEVELOPMENT OF A CLIENT-SERVER BASED ANTIVIRUS

    Directory of Open Access Journals (Sweden)

    Richki Hardi

    2015-07-01

    Full Text Available The era of globalization is also an era in which computer viruses have grown rapidly, no longer a matter of mere academic research but a common problem for computer users around the world. The losses are increasingly widespread because of the use of the Internet as a global communication line between computer users around the world, as indicated by the results of the CSI/FB survey. Along with this progress, computer viruses have undergone an evolution in form, characteristics and distribution media, such as worms, spyware, Trojan horses and other malcode programs. Through the development of a client-server based antivirus, users can easily determine the behavior of viruses and worms, know which parts of the operating system are being attacked, and rely on the resulting network-based client-server antivirus as a fast and reliable scanning engine that recognizes viruses while saving memory.

  7. CERN servers donated to Ghana

    CERN Multimedia

    CERN Bulletin

    2012-01-01

    Cutting-edge research requires a constantly high performance of the computing equipment. At the CERN Computing Centre, computers typically need to be replaced after about four years of use. However, while servers may be withdrawn from cutting-edge use, they are still good for other uses elsewhere. This week, 220 servers and 30 routers were donated to the Kwame Nkrumah University of Science and Technology (KNUST) in Ghana.   “KNUST will provide a good home for these computers. The university has also developed a plan for using them to develop scientific collaboration with CERN,” said John Ellis, a professor at King’s College London and a visiting professor in CERN’s Theory Group.  John Ellis was heavily involved in building the relationship with Ghana, which started in 2006 when a Ghanaian participated in the CERN openlab student programme. Since 2007 CERN has hosted Ghanaians especially from KNUST in the framework of the CERN Summer Student Progr...

  8. Transaction aware tape-infrastructure monitoring

    International Nuclear Information System (INIS)

    Nikolaidis, Fotios; Kruse, Daniele Francesco

    2014-01-01

    Administrating a large-scale, multi-protocol, hierarchical tape infrastructure like the CERN Advanced STORage manager (CASTOR)[2], which now stores 100 PB (growing by 25 PB per year), requires an adequate monitoring system for quick spotting of malfunctions, easier debugging and on-demand report generation. The main challenges for such a system are: coping with the diversity of CASTOR's log formats and with information scattered among several log files, the need for long-term information archival, the strict reliability requirements, and group-based GUI visualization. For this purpose, we have designed, developed and deployed a centralized system consisting of four independent layers: the Log Transfer layer for collecting log lines from all tape servers on a single aggregation server, the Data Mining layer for combining log data into transaction context, the Storage layer for archiving the resulting transactions, and finally the Web UI layer for accessing the information. With flexibility, extensibility and maintainability in mind, each layer is designed to work as a message broker for the next layer, providing a clean and generic interface while ensuring consistency, redundancy and ultimately fault tolerance. This system unifies information previously dispersed over several monitoring tools into a single user interface, using Splunk, which also allows us to provide information visualization based on access control lists (ACLs). Since its deployment, it has been successfully used by CASTOR tape operators for quick overviews of transactions, performance evaluation and malfunction detection, and by managers for report generation.
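
    The Data Mining layer described above essentially stitches scattered log lines back into per-transaction records. The hedged Python sketch below shows that grouping step on a made-up log format; CASTOR's real log layout and field names differ.

    ```python
    # Hedged sketch: group scattered log lines into transaction contexts.
    # The log format and field names here are made up for illustration.
    import re
    from collections import defaultdict

    SAMPLE_LOG = """\
    2024-03-01T10:00:01 tape42 req=abc123 event=MOUNT drive=D3
    2024-03-01T10:00:04 tape42 req=def456 event=MOUNT drive=D7
    2024-03-01T10:02:31 tape42 req=abc123 event=READ bytes=1048576
    2024-03-01T10:02:40 tape42 req=abc123 event=UNMOUNT
    """

    LINE_RE = re.compile(r"^\s*(?P<ts>\S+) (?P<host>\S+) req=(?P<req>\S+) (?P<rest>.*)$")

    def group_transactions(lines):
        """Return one record per request id, combining all of its log lines."""
        transactions = defaultdict(list)
        for line in lines:
            match = LINE_RE.match(line)
            if match:                      # skip lines we cannot parse
                transactions[match["req"]].append(match.groupdict())
        return transactions

    for req_id, events in group_transactions(SAMPLE_LOG.splitlines()).items():
        print(req_id, "->", [e["rest"] for e in events])
    ```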

  9. PANNZER2: a rapid functional annotation web server.

    Science.gov (United States)

    Törönen, Petri; Medlar, Alan; Holm, Liisa

    2018-05-08

    The unprecedented growth of high-throughput sequencing has led to an ever-widening annotation gap in protein databases. While computational prediction methods are available to make up the shortfall, a majority of public web servers are hindered by practical limitations and poor performance. Here, we introduce PANNZER2 (Protein ANNotation with Z-scoRE), a fast functional annotation web server that provides both Gene Ontology (GO) annotations and free text description predictions. PANNZER2 uses SANSparallel to perform high-performance homology searches, making bulk annotation based on sequence similarity practical. PANNZER2 can output GO annotations from multiple scoring functions, enabling users to see which predictions are robust across predictors. Finally, PANNZER2 predictions scored within the top 10 methods for molecular function and biological process in the CAFA2 NK-full benchmark. The PANNZER2 web server is updated on a monthly schedule and is accessible at http://ekhidna2.biocenter.helsinki.fi/sanspanz/. The source code is available under the GNU Public Licence v3.

  10. Logging Concessions Enable Illegal Logging Crisis in the Peruvian Amazon

    Science.gov (United States)

    Finer, Matt; Jenkins, Clinton N.; Sky, Melissa A. Blue; Pine, Justin

    2014-04-01

    The Peruvian Amazon is an important arena in global efforts to promote sustainable logging in the tropics. Despite recent efforts to achieve sustainability, such as provisions in the US-Peru Trade Promotion Agreement, illegal logging continues to plague the region. We present evidence that Peru's legal logging concession system is enabling the widespread illegal logging via the regulatory documents designed to ensure sustainable logging. Analyzing official government data, we found that 68.3% of all concessions supervised by authorities were suspected of major violations. Of the 609 total concessions, nearly 30% have been cancelled for violations and we expect this percentage to increase as investigations continue. Moreover, the nature of the violations indicate that the permits associated with legal concessions are used to harvest trees in unauthorized areas, thus threatening all forested areas. Many of the violations pertain to the illegal extraction of CITES-listed timber species outside authorized areas. These findings highlight the need for additional reforms.

  11. Logging concessions enable illegal logging crisis in the Peruvian Amazon.

    Science.gov (United States)

    Finer, Matt; Jenkins, Clinton N; Sky, Melissa A Blue; Pine, Justin

    2014-04-17

    The Peruvian Amazon is an important arena in global efforts to promote sustainable logging in the tropics. Despite recent efforts to achieve sustainability, such as provisions in the US-Peru Trade Promotion Agreement, illegal logging continues to plague the region. We present evidence that Peru's legal logging concession system is enabling the widespread illegal logging via the regulatory documents designed to ensure sustainable logging. Analyzing official government data, we found that 68.3% of all concessions supervised by authorities were suspected of major violations. Of the 609 total concessions, nearly 30% have been cancelled for violations and we expect this percentage to increase as investigations continue. Moreover, the nature of the violations indicate that the permits associated with legal concessions are used to harvest trees in unauthorized areas, thus threatening all forested areas. Many of the violations pertain to the illegal extraction of CITES-listed timber species outside authorized areas. These findings highlight the need for additional reforms.

  12. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    We analyze the performance of CPU-bound network servers and demonstrate experimentally that the degradation in the performance of these servers under high-concurrency workloads is largely due to inefficient use of the hardware caches. We then describe an approach to speeding up event-driven network servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays

  13. Aespoe Hard Rock Laboratory. BIPS logging in borehole KAS09

    Energy Technology Data Exchange (ETDEWEB)

    Gustafsson, Jaana; Gustafsson, Christer (Malaa Geoscience AB (Sweden))

    2010-01-15

    This report includes the data gained in BIPS logging performed at the Aespoe Hard Rock Laboratory. The logging operation presented here includes BIPS logging in the core-drilled borehole KAS09. The objective of the BIPS logging was to observe the condition of KAS09 in order to restore the borehole to the hydrogeological monitoring programme. All measurements were conducted by Malaa Geoscience AB on October 9th, 2009. The objective of the BIPS logging is to obtain information on the borehole, including the occurrence of rock types as well as the determination of fracture distribution and orientation. This report describes the equipment used as well as the measurement procedures and the data gained. For the BIPS survey, the result is presented as images. The basic conditions of the BIPS logging for geological mapping and orientation of structures are satisfactory for borehole KAS09, although effects induced by the drilling on the borehole walls limit the visibility

  14. Aespoe Hard Rock Laboratory. BIPS logging in borehole KAS09

    International Nuclear Information System (INIS)

    Gustafsson, Jaana; Gustafsson, Christer

    2010-01-01

    This report includes the data gained in BIPS logging performed at the Aespoe Hard Rock Laboratory. The logging operation presented here includes BIPS logging in the core drilled borehole KAS09. The objective for the BIPS logging was to observe the condition of KAS09 in order to restore the borehole in the hydrogeological monitoring programme. All measurements were conducted by Malaa Geoscience AB on October 9th 2009. The objective of the BIPS logging is to obtain information on the borehole, including the occurrence of rock types as well as the determination of fracture distribution and orientation. This report describes the equipment used as well as the measurement procedures and data gained. For the BIPS survey, the result is presented as images. The basic conditions of the BIPS logging for geological mapping and orientation of structures are satisfying for borehole KAS09, although induced effects from the drilling on the borehole walls limit the visibility.

  15. Ontobee: A linked ontology data server to support ontology term dereferencing, linkage, query and integration

    Science.gov (United States)

    Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun

    2017-01-01

    Linked Data (LD) aims to achieve interconnected data by representing entities using Unified Resource Identifiers (URIs), and sharing information using Resource Description Frameworks (RDFs) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. PMID:27733503
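
    Since the record above mentions a SPARQL web interface, a minimal sketch of querying such an endpoint from a script is shown below. The endpoint URL, the query and the result-format parameter are illustrative assumptions; the Ontobee site (http://www.ontobee.org/) documents the actual service.

      # Minimal sketch: querying a SPARQL endpoint such as the one Ontobee exposes.
      # The endpoint URL below is an assumption for illustration only.
      import requests

      SPARQL_ENDPOINT = "http://www.ontobee.org/sparql"  # hypothetical endpoint URL

      query = """
      SELECT ?term ?label
      WHERE {
        ?term <http://www.w3.org/2000/01/rdf-schema#label> ?label .
      }
      LIMIT 10
      """

      response = requests.get(
          SPARQL_ENDPOINT,
          params={"query": query, "format": "application/sparql-results+json"},
          timeout=30,
      )
      response.raise_for_status()
      for binding in response.json()["results"]["bindings"]:
          print(binding["term"]["value"], binding["label"]["value"])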

  16. Clustering results - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Clustering results - Gclust Server | LSDB Archive. Data detail: data name Clustering results; DOI 10.18908/lsdba...

  17. Assessment of physical server reliability in multi cloud computing system

    Science.gov (United States)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. Spreading cloud deployment across multiple service providers creates space for competitive prices that minimize the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and then combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms and explore the steps in the assessment of server reliability.
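
    As a rough illustration of the layered assessment idea described above, the sketch below combines per-layer reliabilities into a system estimate. The series (product) combination rule and the numeric figures are assumptions made for illustration, not the paper's actual model.

      # Minimal sketch of combining per-layer reliabilities, assuming a simple
      # series model (the product rule below is an illustrative assumption, not
      # necessarily the combination used in the cited paper).

      def layer_reliability(component_reliabilities):
          """Reliability of one layer whose components must all work (series)."""
          result = 1.0
          for r in component_reliabilities:
              result *= r
          return result

      def multi_cloud_reliability(application, virtualization, server):
          """Combine the three layers of the layered assessment paradigm."""
          return application * virtualization * server

      app_layer = layer_reliability([0.999, 0.995])        # hypothetical figures
      virt_layer = layer_reliability([0.998])
      server_layer = layer_reliability([0.990, 0.992, 0.985])

      estimate = multi_cloud_reliability(app_layer, virt_layer, server_layer)
      print(f"Estimated system reliability: {estimate:.4f}")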

  18. Development of an Intelligent System to Synthesize Petrophysical Well Logs

    Directory of Open Access Journals (Sweden)

    Morteza Nouri Taleghani

    2013-07-01

    Full Text Available Porosity is one of the fundamental petrophysical properties that should be evaluated for hydrocarbon bearing reservoirs. It is a vital factor in precise understanding of reservoir quality in a hydrocarbon field. Log data are exceedingly crucial information in petroleum industries, for many hydrocarbon parameters are obtained by virtue of petrophysical data. There are three main petrophysical logging tools for the determination of porosity, namely neutron, density, and sonic well logs. Porosity can be determined by the use of each of these tools; however, a precise analysis requires a complete set of these tools. Log sets are commonly either incomplete or unreliable for many reasons (i.e. incomplete logging, measurement errors, and loss of data owing to unsuitable data storage). To overcome this drawback, in this study several intelligent systems such as fuzzy logic (FL), neural network (NN), and support vector machine (SVM) are used to predict synthesized petrophysical logs including neutron, density, and sonic. To accomplish this, the petrophysical well log data were collected from a real reservoir in one of Iran's southwest oil fields. The corresponding correlation was obtained through the comparison of synthesized log values with real log values. The results showed that all intelligent systems were capable of synthesizing petrophysical well logs, but SVM had better accuracy and could be used as the most reliable method compared to the other techniques.
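
    A minimal sketch of the kind of log synthesis described above, here using support vector regression from scikit-learn to predict a sonic log from neutron and density readings. The file name, column mnemonics (NPHI, RHOB, DT) and hyperparameters are assumptions for illustration.

      # Minimal sketch: predicting a missing sonic log from neutron and density
      # readings with support vector regression, one of the intelligent systems
      # the study compares. The CSV file and column names are hypothetical.
      import pandas as pd
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      logs = pd.read_csv("well_logs.csv")        # hypothetical file with NPHI, RHOB, DT columns
      X = logs[["NPHI", "RHOB"]].to_numpy()      # neutron porosity and bulk density
      y = logs["DT"].to_numpy()                  # sonic transit time to be synthesized

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
      model.fit(X_train, y_train)

      print("R^2 on held-out depths:", model.score(X_test, y_test))
      synthetic_dt = model.predict(X_test)       # synthesized sonic log values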

  19. Towards Big Earth Data Analytics: The EarthServer Approach

    Science.gov (United States)

    Baumann, Peter

    2013-04-01

    Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly are made up from coverage data whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantic-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built around rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data
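
    A minimal sketch of the query-language idea described above: submitting a WCPS query to a rasdaman-style OGC endpoint over HTTP and saving the encoded result. The service URL, coverage name and axis labels are hypothetical placeholders, not actual EarthServer endpoints.

      # Minimal sketch: sending a WCPS query to an OGC-compliant endpoint and
      # saving the result. URL, coverage name and axes are assumptions only.
      import requests

      WCPS_ENDPOINT = "https://example.org/rasdaman/ows"   # hypothetical endpoint
      wcps_query = (
          'for c in (MODIS_NDVI) '                          # hypothetical coverage name
          'return encode(c[Lat(40:45), Long(10:15)], "image/tiff")'
      )

      response = requests.post(WCPS_ENDPOINT, data={"query": wcps_query}, timeout=120)
      response.raise_for_status()
      with open("subset.tiff", "wb") as out:
          out.write(response.content)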

  20. Seq2Ref: a web server to facilitate functional interpretation

    Directory of Open Access Journals (Sweden)

    Li Wenlin

    2013-01-01

    Full Text Available Abstract Background The size of the protein sequence database has been exponentially increasing due to advances in genome sequencing. However, experimentally characterized proteins only constitute a small portion of the database, such that the majority of sequences have been annotated by computational approaches. Current automatic annotation pipelines inevitably introduce errors, making the annotations unreliable. Instead of such error-prone automatic annotations, functional interpretation should rely on annotations of ‘reference proteins’ that have been experimentally characterized or manually curated. Results The Seq2Ref server uses BLAST to detect proteins homologous to a query sequence and identifies the reference proteins among them. Seq2Ref then reports publications with experimental characterizations of the identified reference proteins that might be relevant to the query. Furthermore, a plurality-based rating system is developed to evaluate the homologous relationships and rank the reference proteins by their relevance to the query. Conclusions The reference proteins detected by our server will lend insight into proteins of unknown function and provide extensive information to develop in-depth understanding of uncharacterized proteins. Seq2Ref is available at: http://prodata.swmed.edu/seq2ref.
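
    A rough sketch of a plurality-style ranking in the spirit of the rating system described above. The data layout and the vote-counting scheme are illustrative assumptions, not Seq2Ref's actual algorithm.

      # Minimal sketch of a plurality-style ranking of reference proteins by the
      # annotation most common among detected homologs. The data structure and
      # scoring are illustrative assumptions only.
      from collections import Counter

      # Hypothetical BLAST hits: (reference_protein_id, curated_annotation)
      hits = [
          ("P12345", "kinase"),
          ("P67890", "kinase"),
          ("Q11111", "phosphatase"),
          ("P12345", "kinase"),
      ]

      annotation_votes = Counter(annotation for _, annotation in hits)
      plurality_annotation, votes = annotation_votes.most_common(1)[0]

      # Rank the reference proteins that carry the plurality annotation by hit count.
      ranked = sorted(
          {ref for ref, ann in hits if ann == plurality_annotation},
          key=lambda ref: -sum(1 for r, _ in hits if r == ref),
      )
      print("Plurality annotation:", plurality_annotation, f"({votes} votes)")
      print("Top-ranked reference proteins:", ranked)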

  1. A Process Mining Based Service Composition Approach for Mobile Information Systems

    Directory of Open Access Journals (Sweden)

    Chengxi Huang

    2017-01-01

    Full Text Available Due to the growing trend of applying big data and cloud computing technologies in information systems, it is becoming an important issue to handle the connection between large-scale data and the associated business processes in the Internet of Everything (IoE) environment. Service composition, a widely used phase in system development, has some limits when the complexity of the relationships among data increases. Considering the expanding scale and the variety of devices in mobile information systems, a process mining based service composition approach is proposed in this paper in order to improve the adaptiveness and efficiency of compositions. Firstly, a preprocessing step extracts existing service execution information from server-side logs. Then process mining algorithms are applied to discover the overall event sequence from the preprocessed data. After that, a scene-based service composition is applied to aggregate scene information and relocate services of the system. Finally, a case study that applies the work in a mobile medical application shows that the approach is practical and valuable in improving service composition adaptiveness and efficiency.
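
    A minimal sketch of the preprocessing and discovery steps described above: grouping server-side log events by case and building a directly-follows relation, a basic ingredient of process-discovery algorithms. The log format (a timestamp,session_id,service CSV) is a hypothetical stand-in for the paper's data.

      # Minimal sketch of the preprocessing step: group server-side log events by
      # case (here, session id) and build a directly-follows relation, a basic
      # building block of process-discovery algorithms. Log format is hypothetical.
      import csv
      from collections import defaultdict

      traces = defaultdict(list)
      with open("service_log.csv", newline="") as f:           # hypothetical file
          for timestamp, session_id, service in csv.reader(f):
              traces[session_id].append((timestamp, service))

      directly_follows = defaultdict(int)
      for events in traces.values():
          events.sort()                                         # order by timestamp
          for (_, a), (_, b) in zip(events, events[1:]):
              directly_follows[(a, b)] += 1

      for (a, b), count in sorted(directly_follows.items(), key=lambda kv: -kv[1]):
          print(f"{a} -> {b}: {count}")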

  2. Data pre-processing for web log mining: Case study of commercial bank website usage analysis

    Directory of Open Access Journals (Sweden)

    Jozef Kapusta

    2013-01-01

    Full Text Available We use data cleaning, integration, reduction and data conversion methods in the pre-processing level of data analysis. Data processing techniques improve the overall quality of the patterns mined. The paper describes the use of standard pre-processing methods for preparing data of the commercial bank website in the form of the log file obtained from the web server. Data cleaning, as the simplest step of data pre-processing, is non-trivial as the analysed content is highly specific. We had to deal with the problem of frequent changes of the content and even frequent changes of the structure. Regular changes in the structure make use of the sitemap impossible. We present approaches for dealing with this problem: we were able to create the sitemap dynamically based just on the content of the log file. In this case study, we also examined just one part of the website, rather than performing the standard analysis of an entire website, as we did not have access to all log files for security reasons. As a result, the traditional practices had to be adapted for this special case. Analysing just a small fraction of the website resulted in short session times for regular visitors, and we were not able to use the recommended methods to determine the optimal value of the session time. Therefore, we propose in this paper new methods based on outlier identification for raising the accuracy of the session length.
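
    A minimal sketch of session reconstruction from a cleaned web server log. A fixed 30-minute inactivity threshold is used here for simplicity; the paper instead estimates the session length via outlier identification.

      # Minimal sketch of session reconstruction from a cleaned web server log.
      # The fixed 30-minute inactivity threshold is a simplifying assumption.
      from datetime import timedelta

      SESSION_TIMEOUT = timedelta(minutes=30)

      def split_sessions(requests_by_user):
          """requests_by_user: dict mapping a visitor id to a time-sorted list of
          (timestamp, url) tuples; returns a list of sessions (lists of URLs)."""
          sessions = []
          for visits in requests_by_user.values():
              current = []
              previous_time = None
              for timestamp, url in visits:
                  if previous_time is not None and timestamp - previous_time > SESSION_TIMEOUT:
                      sessions.append(current)
                      current = []
                  current.append(url)
                  previous_time = timestamp
              if current:
                  sessions.append(current)
          return sessions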

  3. A Comparison Between Publish-and-Subscribe and Client-Server Models in Distributed Control System Networks

    Science.gov (United States)

    Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)

    1998-01-01

    The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. Client-server architecture provides more flexible data delivery. However, it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.

  4. Analysis of the Macroscopic Behavior of Server Systems in the Internet Environment

    Directory of Open Access Journals (Sweden)

    Yusuke Tanimura

    2017-11-01

    Full Text Available Elasticity is one of the key features of cloud-hosted services built on virtualization technology. To utilize the elasticity of cloud environments, administrators should accurately capture the operational status of server systems, which changes constantly according to service requests that arrive irregularly. However, it is difficult to detect in advance that operating services are falling into an undesirable state, and to avoid it. In this paper, we focus on the management of server systems that include cloud systems, and propose a new method for detecting the sign of undesirable scenarios before the system becomes overloaded as a result of various causes. In this method, a measure that utilizes the fluctuation of the macroscopic operational state observed in the server system is introduced. The proposed measure has the property of drastically increasing before the server system is in an undesirable state. Using the proposed measure, we realize a function to detect that the server system is falling into an overload scenario, and we demonstrate its effectiveness through experiments.
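
    A rough sketch of the idea of watching fluctuations of the macroscopic state: here the rolling standard deviation of a load metric stands in for the fluctuation measure, and an alert is raised when it spikes. The paper defines its own measure; rolling standard deviation and the thresholds below are assumptions.

      # Minimal sketch: flag a possible pre-overload state when the fluctuation
      # (here, rolling standard deviation) of an observed load metric spikes.
      from collections import deque
      from statistics import pstdev

      def fluctuation_alerts(samples, window=30, threshold=5.0):
          """Yield (index, fluctuation) whenever the rolling std exceeds threshold."""
          recent = deque(maxlen=window)
          for i, value in enumerate(samples):
              recent.append(value)
              if len(recent) == window:
                  spread = pstdev(recent)
                  if spread > threshold:
                      yield i, spread

      # Example with synthetic request-rate samples:
      load = [100.0] * 50 + [100.0 + 3.0 * (i % 7) for i in range(50)]
      for index, spread in fluctuation_alerts(load, window=20, threshold=2.0):
          print(f"sample {index}: fluctuation {spread:.2f}")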

  5. The FOLDALIGN web server for pairwise structural RNA alignment and mutual motif search

    DEFF Research Database (Denmark)

    Havgaard, Jakob Hull; Lyngsø, Rune B.; Gorodkin, Jan

    2005-01-01

    FOLDALIGN is a Sankoff-based algorithm for making structural alignments of RNA sequences. Here, we present a web server for making pairwise alignments between two RNA sequences, using the recently updated version of FOLDALIGN. The server can be used to scan two sequences for a common structural RNA...... motif of limited size, or the entire sequences can be aligned locally or globally. The web server offers a graphical interface, which makes it simple to make alignments and manually browse the results. The web server can be accessed at http://foldalign.kvl.dk

  6. Hydrophysical logging: A new wellbore technology for hydrogeologic and contaminant characterization of aquifers

    International Nuclear Information System (INIS)

    Pedler, W.H.; Williams, L.L.; Head, C.L.

    1992-01-01

    In the continuing search for improved groundwater characterization technologies, a new wellbore fluid logging method has recently been developed to provide accurate and cost effective hydrogeologic and contaminant characterization of bedrock aquifers. This new technique, termed hydrophysical logging, provides critical information for contaminated site characterization and water supply studies and, in addition, offers advantages compared to existing industry standards for aquifer characterization. Hydrophysical logging is based on measuring induced electrical conductivity changes in the fluid column of a wellbore by employing advanced downhole water quality instrumentation specifically developed for the dynamic borehole environment. Hydrophysical logging contemporaneously identifies the locations of water bearing intervals, the interval-specific inflow rate during pumping, and in-situ hydrochemistry of the formation waters associated with each producing interval. In addition, by employing a discrete point downhole fluid sampler during hydrophysical logging, this technique provides evaluation of contaminant concentrations and migration of contaminants vertically within the borehole. Recently, hydrophysical logging was applied in a deep bedrock wellbore at an industrial site in New Hampshire contaminated with dense nonaqueous phase liquids (DNAPLs). The results of the hydrophysical logging, conducted as part of a hydrogeologic site investigation and feasibility study, facilitated investigation of the site by providing information which indicated that the contamination had not penetrated into deeper bedrock fractures at concentrations of concern. This information was used to focus the pending Remedial Action Plan and to provide a more cost-effective remedial design

  7. Remote information service access system based on a client-server-service model

    Science.gov (United States)

    Konrad, A.M.

    1996-08-06

    A system comprising a local host computing system and a remote host computing system connected by a network, together with service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, where a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  8. Comparing speed of Web Map Service with GeoServer on ESRI Shapefile and PostGIS

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2016-07-01

    Full Text Available There are several options for configuring a Web Map Service using several map servers. GeoServer is one of the most popular map servers nowadays. GeoServer is able to read data from several sources. A very popular data source is the ESRI Shapefile. It is well documented, and most software for geodata processing is able to read and write data in this format. Another very popular data store is the PostgreSQL/PostGIS object-relational database. Both data sources have advantages and disadvantages, and the user of GeoServer has to decide which one to use. The paper describes a comparison of the performance of the GeoServer Web Map Service when reading data from an ESRI Shapefile or from a PostgreSQL/PostGIS database.
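
    A minimal sketch of how such a comparison can be timed from a client: issuing repeated WMS GetMap requests against a Shapefile-backed and a PostGIS-backed layer. The base URL, layer names and bounding box are hypothetical placeholders.

      # Minimal sketch of timing WMS GetMap requests against two GeoServer layers,
      # one backed by an ESRI Shapefile and one by PostGIS. All identifiers are
      # hypothetical placeholders.
      import time
      import requests

      BASE_URL = "http://localhost:8080/geoserver/wms"
      COMMON = {
          "service": "WMS", "version": "1.1.1", "request": "GetMap",
          "styles": "", "srs": "EPSG:4326", "bbox": "12.0,48.0,19.0,51.0",
          "width": "1024", "height": "768", "format": "image/png",
      }

      def time_layer(layer, repeats=20):
          start = time.perf_counter()
          for _ in range(repeats):
              r = requests.get(BASE_URL, params={**COMMON, "layers": layer}, timeout=60)
              r.raise_for_status()
          return (time.perf_counter() - start) / repeats

      print("Shapefile layer :", time_layer("demo:roads_shapefile"), "s per request")
      print("PostGIS layer   :", time_layer("demo:roads_postgis"), "s per request")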

  9. Design heuristic for parallel many server systems under FCFS-ALIS

    NARCIS (Netherlands)

    Adan, I.J.B.F.; Boon, M.; Weiss, G.

    2016-01-01

    We study a parallel service queueing system with servers of types $s_1,\ldots,s_J$, customers of types $c_1,\ldots,c_I$, bipartite compatibility graph $\mathcal{G}$, where arc $(c_i, s_j)$ indicates that server type $s_j$ can serve customer type $c_i$, and service policy of first come first served

  10. How to Configurate Oracle Enterprise Manager on Windows 2000 Server

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Oracle Enterprise Manager is a system management tool, which provides an integrated solution for centrally managing your heterogeneous environment Servers. Enterprise Manager combines a graphical Console, Oracle Management Servers, Oracle Intelligent Agents, common services, and tools to provide an integrated, comprehensive systems management platform for managing Oracle products, and is comprised of such as Data

  11. Design and Delivery of Multiple Server-Side Computer Languages Course

    Science.gov (United States)

    Wang, Shouhong; Wang, Hai

    2011-01-01

    Given the emergence of service-oriented architecture, IS students need to be knowledgeable of multiple server-side computer programming languages to be able to meet the needs of the job market. This paper outlines the pedagogy of an innovative course of multiple server-side computer languages for the undergraduate IS majors. The paper discusses…

  12. Usage of Thin-Client/Server Architecture in Computer Aided Education

    Science.gov (United States)

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  13. Instant SQL Server Analysis Services 2012 Cube Security

    CERN Document Server

    Jayanty, Satya SK

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Instant Microsoft SQL Server Analysis Services 2012 Cube Security is a practical, hands-on guide that provides a number of clear, step-by-step exercises for getting started with cube security.This book is aimed at Database Administrators, Data Architects, and Systems Administrators who are managing the SQL Server data platform. It is also beneficial for analysis services developers who already have some experience with the technology, but who want to go into more detail on advanced

  14. Lenovo acquires IBM's x86 low-end server business

    Directory of Open Access Journals (Sweden)

    Singh Pal Netra

    2015-01-01

    Full Text Available This paper presents an analysis of the key events, impacts and issues of Lenovo buying IBM's x86 low-end server business. The analysis includes (i) approval of the deal by regulatory bodies in the United States, Canada, India and China, (ii) security concerns of US government departments, (iii) pricing of the deal, (iv) the possible impact on IBM in the future, and (v) the possibility of Lenovo making this a repeat of its acquisition of IBM's ThinkPad business. The paper presents analysis of qualitative and time series quantitative data. The qualitative data mainly consist of different events before and after the acquisition of the x86 server IBM business by Lenovo. The quantitative data are analyzed with respect to growth parameters of the overall server business and the Lenovo server business. The research paper also attempts to answer nine specific research questions with respect to the impact on the ecosystems of IBM and Lenovo. Based on the analysis, it is inferred that IBM was not able to manage its traditional and well-accepted products business in the face of fierce competition and low demand, but Lenovo will manage. The deal was a financial necessity for IBM and a strategic expansion into new markets for Lenovo.

  15. Defense strategies for cloud computing multi-site server infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S. [ORNL]; Ma, Chris Y. T. [Hang Seng Management College, Hong Kong]; He, Fei [Texas A&M University, Kingsville, TX, USA]

    2018-01-01

    We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and also can be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels using: (a) aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash Equilibrium, we derive expressions for the expected capacity of the infrastructure given by the number of operational servers connected to the network for sum-form, product-form and composite utility functions.

  16. Optimal Service Capacities in a Competitive Multiple-Server Queueing Environment

    Science.gov (United States)

    Ching, Wai-Ki; Choi, Sin-Man; Huang, Min

    The study of economic behavior of service providers in a competition environment is an important and interesting research issue. A two-server queueing model has been proposed in Kalai et al. [11] for this purpose. Their model aims at studying the role and impact of service capacity in capturing larger market share so as to maximize the long-run expected profit. They formulate the problem as a two-person strategic game and analyze the equilibrium solutions. The main aim of this paper is to extend the results of the two-server queueing model in [11] to the case of multiple servers. We will only focus on the case when the queueing system is stable.

  17. AML (Advanced Mud Logging: First Among Equals

    Directory of Open Access Journals (Sweden)

    T. Loermans

    2017-09-01

    Full Text Available During the past ten years, enormous developments in mud logging technology have been made. Traditional mud logging was only qualitative in nature, and mudlogs could not be used for the petrophysical well evaluations which form the basis for all subsequent activities on wells and fields. AML, however, can provide quantitative information: logs with a reliability, trueness and precision like those of LWD and WLL. Hence for well evaluation programmes there are now three different logging methods available, each with its own pros and cons on specific aspects: AML, LWD and WLL. The largest improvements have been made in mud gas analysis and elemental analysis of cuttings. Mud gas analysis can yield hydrocarbon fluid composition for some components with a quality like PVT analysis, hence not only revolutionising the sampling programme so far done with only LWD/WLL, but also making it possible to geosteer on fluid properties. Elemental analysis of cuttings, e.g. with XRF, with an ability well beyond the capabilities of the spectroscopy measurements possible earlier with LWD/WLL tools, is opening up improved ways to evaluate formations, especially of course where the traditional methods are falling short of requirements, such as in unconventional reservoirs. An overview and specific examples of these AML logs are given, from which it may be concluded that AML now ought to be considered as “first among its equals”.

  18. (m, M) Machining system with two unreliable servers, mixed spares and common-cause failure

    OpenAIRE

    Jain, Madhu; Mittal, Ragini; Kumari, Rekha

    2015-01-01

    This paper deals with multi-component machine repair model having provision of warm standby units and repair facility consisting of two heterogeneous servers (primary and secondary) to provide repair to the failed units. The failure of operating and standby units may occur individually or due to some common cause. The primary server may fail partially following full failure whereas secondary server faces complete failure only. The life times of servers and operating/standby units and their re...

  19. Pemetaan Subdomain Pada Cloud Server Universitas Semarang Menggunakan Metode Port Forwarding dan Reverse Proxy

    Directory of Open Access Journals (Sweden)

    Mohammad Sani Suprayogi

    2017-02-01

    Given the limited number of public IP addresses available to each institution, this research aims to produce a configuration on the cloud server that optimizes the use of private IPs within the network, and then maps subdomains and private IPs to each server so that they can be accessed by visitors. As a result, Universitas Semarang needs only one public IP address, which functions as a gateway to the servers running in the cloud network. In addition, this technique can serve as enrichment material for the Computer Networks (Jaringan Komputer) course.
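
    A minimal sketch of the underlying idea: a single public gateway that inspects the Host header and forwards requests to servers on private addresses. The hostnames and private IPs below are hypothetical, and a production deployment would normally use a dedicated reverse proxy such as nginx rather than this toy forwarder.

      # Minimal sketch of subdomain-based reverse proxying: map the Host header
      # to a backend on a private address and forward the request. Hostnames and
      # private IPs are hypothetical placeholders.
      import http.client
      from http.server import BaseHTTPRequestHandler, HTTPServer

      SUBDOMAIN_MAP = {
          "sia.example.ac.id": ("192.168.10.11", 80),        # hypothetical mappings
          "elearning.example.ac.id": ("192.168.10.12", 80),
      }

      class ReverseProxyHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              host = self.headers.get("Host", "").split(":")[0]
              backend = SUBDOMAIN_MAP.get(host)
              if backend is None:
                  self.send_error(502, "Unknown subdomain")
                  return
              upstream = http.client.HTTPConnection(*backend, timeout=30)
              upstream.request("GET", self.path, headers={"Host": host})
              response = upstream.getresponse()
              body = response.read()
              self.send_response(response.status)
              for name, value in response.getheaders():
                  if name.lower() not in ("transfer-encoding", "connection"):
                      self.send_header(name, value)
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          # Binding port 80 requires elevated privileges; use 8080 for testing.
          HTTPServer(("0.0.0.0", 8080), ReverseProxyHandler).serve_forever()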

  20. Prestigious nuclear research organization orders Silicom's cutting-edge server adapters

    CERN Multimedia

    2003-01-01

    "Silicom Ltd today announced that one of the world's largest and most prestigious nuclear research organization has placed an initial order for its Gigabit Ethernet Server Adapters. Silicom's high-performance adapters will be deployed in the organization's state-of-the-art particle physics laboratory servers to help them attain reliable gigabit transfer rates" (1/2 page).

  1. Cybersecurity, massive data processing, community interaction, and other developments at WWW-based computational X-ray Server

    Science.gov (United States)

    Stepanov, Sergey

    2013-03-01

    X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.

  2. Cybersecurity, massive data processing, community interaction, and other developments at WWW-based computational X-ray Server

    International Nuclear Information System (INIS)

    Stepanov, Sergey

    2013-01-01

    X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.
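
    The records above note that the server programs can be driven from user software. A minimal sketch of such automated access over HTTP follows; the program path and parameter names are illustrative assumptions only, and the server's documentation defines the actual programs and their input fields.

      # Minimal sketch of automated access to a web-based computational service
      # such as the X-Ray Server. The program path and parameter names below are
      # hypothetical; consult the server documentation for real ones.
      import requests

      SERVER = "https://x-server.gmca.aps.anl.gov"
      program = "/cgi/some_program.pl"        # hypothetical program path

      payload = {
          "xway": 2,          # hypothetical parameter: wavelength mode
          "wave": 12.398,     # hypothetical parameter: energy/wavelength value
          "code": "Silicon",  # hypothetical parameter: crystal
      }

      response = requests.post(SERVER + program, data=payload, timeout=120)
      response.raise_for_status()
      print(response.text[:500])   # the server returns an HTML/text page with results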

  3. PONGO: a web server for multiple predictions of all-alpha transmembrane proteins

    DEFF Research Database (Denmark)

    Amico, M.; Finelli, M.; Rossi, I.

    2006-01-01

    The annotation efforts of the BIOSAPIENS European Network of Excellence have generated several distributed annotation systems (DAS) with the aim of integrating Bioinformatics resources and annotating metazoan genomes ( http://www.biosapiens.info/ ). In this context, the PONGO DAS server ( http... ...of the organism and more importantly with the same sequence profile for a given sequence when required. Here we present a new web server that incorporates the state-of-the-art topology predictors in a single framework, so that putative users can interactively compare and evaluate four predictions simultaneously...... for a given sequence. Together with the predicted topology, the server also displays a signal peptide prediction determined with SPEP. The PONGO web server is available at http://pongo.biocomp.unibo.it/pongo

  4. Development of a high-performance image server using ATM technology

    Science.gov (United States)

    Do Van, Minh; Humphrey, Louis M.; Ravin, Carl E.

    1996-05-01

    The ability to display digital radiographs to a radiologist in a reasonable time has long been the goal of many PACS. Intelligent routing, or pre-fetching images, has become a solution whereby a system uses a set of rules to route the images to a pre-determined destination. Images would then be stored locally on a workstation for faster display times. Some PACS use a large, centralized storage approach and workstations retrieve images over high bandwidth connections. Another approach to image management is to provide a high performance, clustered storage system. This has the advantage of eliminating the complexity of pre-fetching and allows for rapid image display from anywhere within the hospital. We discuss the development of such a storage device, which provides extremely fast access to images across a local area network. Among the requirements for development of the image server were high performance, DICOM 3.0 compliance, and the use of industry standard components. The completed image server provides performance more than sufficient for use in clinical practice. Setting up modalities to send images to the image server is simple due to the adherence to the DICOM 3.0 specification. Using only off-the-shelf components allows us to keep the cost of the server relatively inexpensive and allows for easy upgrades as technology becomes more advanced. These factors make the image server ideal for use as a clustered storage system in a radiology department.

  5. Unconditionally Secure Protocols

    DEFF Research Database (Denmark)

    Meldgaard, Sigurd Torkel

    This thesis contains research on the theory of secure multi-party computation (MPC). Especially information theoretically (as opposed to computationally) secure protocols. It contains results from two main lines of work. One line on Information Theoretically Secure Oblivious RAMS, and how....... We construct an oblivious RAM that hides the client's access pattern with information theoretic security with an amortized $\\log^3 N$ query overhead. And how to employ a second server that is guaranteed not to conspire with the first to improve the overhead to $\\log^2 N$, while also avoiding...... they are used to speed up secure computation. An Oblivious RAM is a construction for a client with a small $O(1)$ internal memory to store $N$ pieces of data on a server while revealing nothing more than the size of the memory $N$, and the number of accesses. This specifically includes hiding the access pattern...

  6. A Graphical Client-Server Approach to Financial Management

    CERN Document Server

    Möller, M

    1994-01-01

    At the European Laboratory for Particle Physics (CERN), we have an annual budget of around 600 million US dollars. In order to manage this budget successfully, fast, accurate and easy information access is required throughout the management hierarchy. To meet these goals we have focused on the powerful combination of Relational Database Technology, Fourth Generation Tools and Client-Server architecture. Using these technologies we have developed a powerful and easy-to-use management information tool (known as the BHT) which allows the follow up and tracking of expenditure at all levels throughout the organization. Executives may instantaneously produce up-to-date graphics showing the expenditure profile of the organization. These graphics may then be used as a basis for 'zooming in' to view more and more details until the individual financial transactions are reached (all of which are on-line and available on the user’s desktop). The graphical user interface runs on both Macintosh and PC. Using ORACLE...

  7. An Enhanced Biometric Based Authentication with Key-Agreement Protocol for Multi-Server Architecture Based on Elliptic Curve Cryptography

    Science.gov (United States)

    Reddy, Alavalapati Goutham; Das, Ashok Kumar; Odelu, Vanga; Yoo, Kee-Young

    2016-01-01

    Biometric based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation of this paper proves that Lu et al.’s protocol does not provide user anonymity, perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-middle attacks and clock synchronization problems. In addition, this paper proposes an enhanced biometric based authentication with key-agreement protocol for multi-server architecture based on elliptic curve cryptography using smartcards. We proved that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and performance analysis demonstrates that the proposed protocol is robust and efficient compared to Lu et al.’s protocol and existing similar protocols. PMID:27163786

  8. An Enhanced Biometric Based Authentication with Key-Agreement Protocol for Multi-Server Architecture Based on Elliptic Curve Cryptography.

    Directory of Open Access Journals (Sweden)

    Alavalapati Goutham Reddy

    Full Text Available Biometric based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation of this paper proves that Lu et al.'s protocol does not provide user anonymity, perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-middle attacks and clock synchronization problems. In addition, this paper proposes an enhanced biometric based authentication with key-agreement protocol for multi-server architecture based on elliptic curve cryptography using smartcards. We proved that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and performance analysis demonstrates that the proposed protocol is robust and efficient compared to Lu et al.'s protocol and existing similar protocols.
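
    A minimal sketch of the elliptic-curve key agreement primitive on which protocols like the one above build, using the Python "cryptography" package. This is not the paper's protocol, which additionally involves biometrics, smart cards and a registration phase.

      # Minimal sketch of elliptic-curve Diffie-Hellman key agreement. It only
      # illustrates the ECC primitive underlying such multi-server protocols.
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import ec
      from cryptography.hazmat.primitives.kdf.hkdf import HKDF

      # Each party generates an ephemeral key pair on the same curve.
      user_private = ec.generate_private_key(ec.SECP256R1())
      server_private = ec.generate_private_key(ec.SECP256R1())

      # Each side derives the same shared secret from its private key and the
      # peer's public key, then stretches it into a session key with HKDF.
      user_shared = user_private.exchange(ec.ECDH(), server_private.public_key())
      server_shared = server_private.exchange(ec.ECDH(), user_private.public_key())

      def session_key(shared_secret):
          return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                      info=b"multi-server session").derive(shared_secret)

      assert session_key(user_shared) == session_key(server_shared)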

  9. Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile

    Science.gov (United States)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco

    2014-05-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.

  10. PostgreSQL server programming

    CERN Document Server

    Krosing, Hannu

    2013-01-01

    This practical guide leads you through numerous aspects of working with PostgreSQL. Step by step examples allow you to easily set up and extend PostgreSQL. "PostgreSQL Server Programming" is for moderate to advanced PostgreSQL database professionals. To get the best understanding of this book, you should have general experience in writing SQL, a basic idea of query tuning, and some coding experience in a language of your choice.

  11. TSKT-ORAM: A Two-Server k-ary Tree Oblivious RAM without Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Jinsheng Zhang

    2017-09-01

    Full Text Available This paper proposes TSKT-oblivious RAM (ORAM), an efficient multi-server ORAM construction, to protect a client’s access pattern to outsourced data. TSKT-ORAM organizes each of the server storages as a k-ary tree and adopts XOR-based private information retrieval (PIR) and a novel delayed eviction technique to optimize both the data query and data eviction process. TSKT-ORAM is proven to protect the data access pattern privacy with a failure probability of 2^-80 when system parameter k ≥ 128. Meanwhile, given a constant-size local storage, when N (i.e., the total number of outsourced data blocks) ranges from 2^16 to 2^34, the communication cost of TSKT-ORAM is only 22–46 data blocks. Asymptotic analysis and practical comparisons are conducted to show that TSKT-ORAM incurs lower communication cost, storage cost and access delay in practical scenarios than the compared state-of-the-art ORAM schemes.
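
    A minimal sketch of the two-server XOR-based private information retrieval building block mentioned above. The toy database and block size are assumptions; a real deployment operates on fixed-size storage blocks held by two non-colluding servers.

      # Minimal sketch of two-server XOR-based PIR: the client sends two random-
      # looking selection masks that differ only at the wanted index, and XORs
      # the two answers to recover the block without revealing the index.
      import secrets

      def xor_bytes(a, b):
          return bytes(x ^ y for x, y in zip(a, b))

      def query(database_size, wanted_index):
          """Client: a random subset for server 1, the same subset toggled at the
          wanted index for server 2. Each mask alone reveals nothing."""
          mask1 = [secrets.randbelow(2) for _ in range(database_size)]
          mask2 = list(mask1)
          mask2[wanted_index] ^= 1
          return mask1, mask2

      def answer(database, mask):
          """Server: XOR of all blocks selected by the mask."""
          result = bytes(len(database[0]))
          for block, bit in zip(database, mask):
              if bit:
                  result = xor_bytes(result, block)
          return result

      database = [bytes([i]) * 8 for i in range(16)]     # 16 toy blocks of 8 bytes
      mask1, mask2 = query(len(database), wanted_index=5)
      recovered = xor_bytes(answer(database, mask1), answer(database, mask2))
      assert recovered == database[5]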

  12. Information resources assessment of a healthcare integrated delivery system.

    Science.gov (United States)

    Gadd, C. S.; Friedman, C. P.; Douglas, G.; Miller, D. J.

    1999-01-01

    While clinical healthcare systems may have lagged behind computer applications in other fields in the shift from mainframes to client-server architectures, the rapid deployment of newer applications is closing that gap. Organizations considering the transition to client-server must identify and position themselves to provide the resources necessary to implement and support the infrastructure requirements of client-server architectures and to manage the accelerated complexity at the desktop, including hardware and software deployment, training, and maintenance needs. This paper describes an information resources assessment of the recently aligned Pennsylvania regional Veterans Administration Stars and Stripes Health Network (VISN4), in anticipation of the shift from a predominantly mainframe to a client-server information systems architecture in its well-established VistA clinical information system. The multimethod assessment study is described here to demonstrate this approach and its value to regional healthcare networks undergoing organizational integration and/or significant information technology transformations. PMID:10566414

  13. Information resources assessment of a healthcare integrated delivery system.

    Science.gov (United States)

    Gadd, C S; Friedman, C P; Douglas, G; Miller, D J

    1999-01-01

    While clinical healthcare systems may have lagged behind computer applications in other fields in the shift from mainframes to client-server architectures, the rapid deployment of newer applications is closing that gap. Organizations considering the transition to client-server must identify and position themselves to provide the resources necessary to implement and support the infrastructure requirements of client-server architectures and to manage the accelerated complexity at the desktop, including hardware and software deployment, training, and maintenance needs. This paper describes an information resources assessment of the recently aligned Pennsylvania regional Veterans Administration Stars and Stripes Health Network (VISN4), in anticipation of the shift from a predominantly mainframe to a client-server information systems architecture in its well-established VistA clinical information system. The multimethod assessment study is described here to demonstrate this approach and its value to regional healthcare networks undergoing organizational integration and/or significant information technology transformations.

  14. SISTEM INFORMASI KEPENDUDUKAN BERBASIS CLIENT SERVER DI KELURAHAN BOBOSAN PURWOKERTO

    Directory of Open Access Journals (Sweden)

    Agustin Citra Dwicahya

    2010-02-01

    Full Text Available A population information system records population data at the village (kelurahan) level. The problem addressed in this research is that population data collection in the Bobosan village office is still not optimal and not time-efficient; by using information system technology, errors in population data collection can be minimized. This research aims to design and build a client-server population information system to help process accurate information data in the Bobosan village office. Data for this research were collected using interviews, observation, literature study, and documentation. The system was developed using the waterfall method, a classic methodology used to develop, maintain, and use information systems. The result of this research is a client-server population information system application built with Visual Studio 2008 and SQL Server 2008 that can be accessed over a LAN (Local Area Network).

  15. Disclosure-Protected Inference with Linked Microdata Using a Remote Analysis Server

    Directory of Open Access Journals (Sweden)

    Chipperfield James O.

    2014-03-01

    Full Text Available Large amounts of microdata are collected by data custodians in the form of censuses and administrative records. Often, data custodians will collect different information on the same individual. Many important questions can be answered by linking microdata collected by different data custodians. For this reason, there is very strong demand from analysts, within government, business, and universities, for linked microdata. However, many data custodians are legally obliged to ensure the risk of disclosing information about a person or organisation is acceptably low. Different authors have considered the problem of how to facilitate reliable statistical inference from analysis of linked microdata while ensuring that the risk of disclosure is acceptably low. This article considers the problem from the perspective of an Integrating Authority that, by definition, is trusted to link the microdata and to facilitate analysts’ access to the linked microdata via a remote server, which allows analysts to fit models and view the statistical output without being able to observe the underlying linked microdata. One disclosure risk that must be managed by an Integrating Authority is that one data custodian may use the microdata it supplied to the Integrating Authority and statistical output released from the remote server to disclose information about a person or organisation that was supplied by the other data custodian. This article considers analysis of only binary variables. The utility and disclosure risk of the proposed method are investigated both in a simulation and using a real example. This article shows that some popular protections against disclosure (dropping records, rounding regression coefficients or imposing restrictions on model selection can be ineffective in the above setting.

  16. SharePoint Server 2010 Administration 24 Hour Trainer

    CERN Document Server

    Crider, Bill; Richardson, Clint

    2012-01-01

    Get quickly up to speed on SharePoint Server 2010! Covering all aspects of the SharePoint technology, this unique book-and-DVD combination provides expert guidance within each lesson in the book, which is then supplemented on the instructional DVD. The authors expose you to a variety of SharePoint Server 2010 topics, from organization concerns to training plans to programmer best practices, all aimed at helping you effortlessly find your way around SharePoint without a deep knowledge of the technology. You’ll quickly learn to configure and administer a site or site collection using this

  17. Ontobee: A linked ontology data server to support ontology term dereferencing, linkage, query and integration.

    Science.gov (United States)

    Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun

    2017-01-04

    Linked Data (LD) aims to achieve interconnected data by representing entities using Unified Resource Identifiers (URIs), and sharing information using Resource Description Frameworks (RDFs) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Can machine learning on learner analytics produce a predictive model on student performance?

    OpenAIRE

    Busch, John; Hanna, Philip; O'Neill, Ian; McGowan, Aidan; Collins, Matthew

    2017-01-01

    The aim of this research is to analyse, using machine learning algorithms, the past learner analytics of students who had undertaken a web development and programming module. Specifically using the access and error web server logs from each student's web server provides deeper learner analytics data. The web server logs every web file access and error access from a browser, so each data file can be related directly to a student's engagement level and assessment strategy. Each log holds severa...
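
    A rough sketch of the pipeline described above: deriving per-student features from access and error logs and cross-validating a classifier against pass/fail labels. The file names, log columns and the choice of a random forest are assumptions for illustration.

      # Minimal sketch: per-student features from web server access/error logs fed
      # to a classifier that predicts a pass/fail label. Log format, file names and
      # the model choice are illustrative assumptions only.
      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      access = pd.read_csv("access_logs.csv")   # hypothetical: student_id, timestamp, path, status
      grades = pd.read_csv("grades.csv")        # hypothetical: student_id, passed (0/1)

      features = access.groupby("student_id").agg(
          total_requests=("path", "count"),
          distinct_pages=("path", "nunique"),
          error_requests=("status", lambda s: (s >= 400).sum()),
      ).reset_index()

      data = features.merge(grades, on="student_id")
      X = data[["total_requests", "distinct_pages", "error_requests"]]
      y = data["passed"]

      model = RandomForestClassifier(n_estimators=200, random_state=0)
      print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())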

  19. Pinpointing water entries using pulsed neutron and Production logging tools

    International Nuclear Information System (INIS)

    Mukerji, P.; Oluwa, J.

    2003-01-01

    A successful workover requires a comprehensive understanding of fluid entries into the wellbore and fluid contact movement in the reservoir. Such information can be obtained by a combination of production logs and saturation-monitoring measurements. The ability to combine pulsed neutron and production logging tools provides the operator with better diagnostics for identifying candidates for remedial actions and greatly increases the possibility of a successful well intervention. Advances in pulsed neutron spectroscopy tools have improved the accuracy and precision of measured carbon/oxygen ratios. Some of the improvements in accuracy and precision have resulted from better tool characterization in a wider variety of logging environments in the calibration facility and new spectral standards. Coincident with the advances in pulsed neutron spectroscopy has been the development of production logging measurements run on a common platform. We will show how the application of pulsed neutron and production logs can optimize subsequent well intervention to reduce water production and/or increase oil production

  20. EarthServer - 3D Visualization on the Web

    Science.gov (United States)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open GeoSpatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers without requiring the user to install plugins or addons. Additionally, we are able to run the earth data visualization client on a wide range of different platforms with very different soft- and hardware requirements such as smart phones (e.g. iOS, Android), different desktop systems etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. Underlying the EarthServer web client

  1. Genonets server-a web server for the construction, analysis and visualization of genotype networks.

    Science.gov (United States)

    Khalid, Fahad; Aguilar-Rodríguez, José; Wagner, Andreas; Payne, Joshua L

    2016-07-08

    A genotype network is a graph in which vertices represent genotypes that have the same phenotype. Edges connect vertices if their corresponding genotypes differ in a single small mutation. Genotype networks are used to study the organization of genotype spaces. They have shed light on the relationship between robustness and evolvability in biological systems as different as RNA macromolecules and transcriptional regulatory circuits. Despite the importance of genotype networks, no tool exists for their automatic construction, analysis and visualization. Here we fill this gap by presenting the Genonets Server, a tool that provides the following features: (i) the construction of genotype networks for categorical and univariate phenotypes from DNA, RNA, amino acid or binary sequences; (ii) analyses of genotype network topology and how it relates to robustness and evolvability, as well as analyses of genotype network topography and how it relates to the navigability of a genotype network via mutation and natural selection; (iii) multiple interactive visualizations that facilitate exploratory research and education. The Genonets Server is freely available at http://ieu-genonets.uzh.ch. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
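
    As a toy illustration of the genotype-network construction described above (not the Genonets Server code), the sketch below builds a graph whose vertices are genotypes assumed to share a phenotype and whose edges join genotypes differing at exactly one position, using the networkx library.

```python
# Toy genotype network: vertices are genotypes sharing a phenotype,
# edges join genotypes that differ at exactly one position.
import networkx as nx

def genotype_network(genotypes):
    """Build a graph whose edges connect single-mutation neighbours."""
    g = nx.Graph()
    g.add_nodes_from(genotypes)
    gl = list(genotypes)
    for i, a in enumerate(gl):
        for b in gl[i + 1:]:
            if len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1:
                g.add_edge(a, b)
    return g

# Binary genotypes assumed to share one phenotype (illustrative data only).
net = genotype_network(["0000", "0001", "0011", "0111", "1111", "1101"])
print(nx.number_connected_components(net))   # crude view of genotype-space structure
```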

  2. Towards Direct Manipulation and Remixing of Massive Data: The EarthServer Approach

    Science.gov (United States)

    Baumann, P.

    2012-04-01

    Complex analytics on "big data" is one of the core challenges of current Earth science, generating strong requirements for on-demand processing and fil tering of massive data sets. Issues under discussion include flexibility, performance, scalability, and the heterogeneity of the information types invo lved. In other domains, high-level query languages (such as those offered by database systems) have proven successful in the quest for flexible, scalable data access interfaces to massive amounts of data. However, due to the lack of support for many of the Earth science data structures, database systems are only used for registries and catalogs, but not for the bulk of spatio-temporal data. One core information category in this field is given by coverage data. ISO 19123 defines coverages, simplifying, as a representation of a "space-time varying phenomenon". This model can express a large class of Earth science data structures, including rectified and non-rectified rasters, curvilinear grids, point clouds, TINs, general meshes, trajectories, surfaces, and solids. This abstract definition, which is too high-level to establish interoperability, is concretized by the OGC GML 3.2.1 Application Schema for Coverages Standard into an interoperable representation. The OGC Web Coverage Processing Service (WCPS) Standard defines a declarative query language on multi-dimensional raster-type coverages, such as 1D in-situ sensor timeseries, 2D EO imagery, 3D x/y/t image time series and x/y/z geophysical data, 4D x/y/z/t climate and ocean data. Hence, important ingredients for versatile coverage retrieval are given - however, this potential has not been fully unleashed by service architectures up to now. The EU FP7-INFRA project EarthServer, launched in September 2011, aims at enabling standards-based on-demand analytics over the Web for Earth science data based on an integration of W3C XQuery for alphanumeric data and OGC-WCPS for raster data. Ultimately, EarthServer will support

  3. Study of the reservoirs of Jurassic and Cretaceous periods in the south-east slope of Central Kara-Kum vault using combination of acoustic logging, neutron-gamma logging, gamma logging, and electrical logging

    International Nuclear Information System (INIS)

    Meredov, T.M.; Baranov, M.I.

    1978-01-01

    Considered is the possibility of applying a combination of neutron-gamma logging, gamma logging, acoustic logging and electrical logging to the lithological partition of sections, the discovery of reservoir layers in carbonate and terrigenous sections, and the quantitative estimation of porosity coefficient values at prospecting areas on the south-east slope of the Central Kara-Kum vault. Neutron-gamma logging mostly makes it possible to partition carbonate rocks into limestones, dolomites and their interstitial varieties and to indicate sandstone layers with different degrees of carbonate content

  4. Fracture diagnostics with tube wave reflection logs

    International Nuclear Information System (INIS)

    Medlin, W.L.

    1991-01-01

    This paper reports on the Tube Wave Reflection Log (TWRL), an acoustic logging method which provides information about the height, location and conductivity of hydraulically induced fractures behind perforated casing. The TWRL tool consists of a transmitter and a closely spaced receiver. The transmitter is driven with a short, low-frequency tone burst to generate long-wavelength tube waves, which are little attenuated in unperforated casing. They are partially reflected when they pass perforated intervals communicating with a hydraulically induced fracture. The tool listens for such reflections for 0.1 seconds following each excitation burst. As the tool is moved uphole at logging speed, the transmitter is excited at each foot of depth. VDL displays of the TWRL records provide reflection traces whose projections define the uppermost and lowermost perforations communicating with the fracture. The strength of the reflections depends on the ease of fluid flow into the fracture and is thus an indicator of fracture conductivity

  5. iELM—a web server to explore short linear motif-mediated interactions

    Science.gov (United States)

    Weatheritt, Robert J.; Jehl, Peter; Dinkel, Holger; Gibson, Toby J.

    2012-01-01

    The recent expansion in our knowledge of protein–protein interactions (PPIs) has allowed the annotation and prediction of hundreds of thousands of interactions. However, the function of many of these interactions remains elusive. The interactions of Eukaryotic Linear Motif (iELM) web server provides a resource for predicting the function and positional interface for a subset of interactions mediated by short linear motifs (SLiMs). The iELM prediction algorithm is based on the annotated SLiM classes from the Eukaryotic Linear Motif (ELM) resource and allows users to explore both annotated and user-generated PPI networks for SLiM-mediated interactions. By incorporating the annotated information from the ELM resource, iELM provides functional details of PPIs. This can be used in proteomic analysis, for example, to infer whether an interaction promotes complex formation or degradation. Furthermore, details of the molecular interface of the SLiM-mediated interactions are also predicted. This information is displayed in a fully searchable table, as well as graphically with the modular architecture of the participating proteins extracted from the UniProt and Phospho.ELM resources. A network figure is also presented to aid the interpretation of results. The iELM server supports single protein queries as well as large-scale proteomic submissions and is freely available at http://i.elm.eu.org. PMID:22638578

  6. BIPS: BIANA Interolog Prediction Server. A tool for protein-protein interaction inference.

    Science.gov (United States)

    Garcia-Garcia, Javier; Schleker, Sylvia; Klein-Seetharaman, Judith; Oliva, Baldo

    2012-07-01

    Protein-protein interactions (PPIs) play a crucial role in biology, and high-throughput experiments have greatly increased the coverage of known interactions. Still, the identification of complete inter- and intraspecies interactomes is far from finished. Experimental data can be complemented by the prediction of PPIs within an organism or between two organisms based on the known interactions of the orthologous genes of other organisms (interologs). Here, we present the BIANA (Biologic Interactions and Network Analysis) Interolog Prediction Server (BIPS), which offers a web-based interface to facilitate PPI predictions based on interolog information. BIPS benefits from the capabilities of the BIANA framework to integrate several PPI-related databases. Additional metadata can be used to improve the reliability of the predicted interactions. Sensitivity and specificity of the server have been calculated using known PPIs from different interactomes using a leave-one-out approach. The specificity is between 72 and 98%, whereas sensitivity varies between 1 and 59%, depending on the sequence identity cut-off used to calculate similarities between sequences. BIPS is freely accessible at http://sbi.imim.es/BIPS.php.
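
    The interolog idea itself is simple to sketch. The toy function below (not the BIPS pipeline) maps known PPIs from a source organism onto a target organism through an orthology table; all gene names are made up.

```python
# Bare-bones interolog mapping: if p-q interact in the source organism and
# p, q have orthologues a, b in the target organism, predict a-b.
def predict_interologs(known_ppis, orthologs):
    """known_ppis: iterable of (p, q) pairs in the source organism.
    orthologs: dict mapping source proteins to sets of target-organism orthologues."""
    predicted = set()
    for p, q in known_ppis:
        for a in orthologs.get(p, ()):
            for b in orthologs.get(q, ()):
                predicted.add(tuple(sorted((a, b))))
    return predicted

# Illustrative toy data (gene names are invented).
ppis = [("YFG1", "YFG2"), ("YFG2", "YFG3")]
orth = {"YFG1": {"HsA"}, "YFG2": {"HsB"}, "YFG3": {"HsC", "HsD"}}
print(predict_interologs(ppis, orth))
```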

  7. Analysis and implementation for the creation of a SQL Server cluster as a centralized solution for satellite databases

    OpenAIRE

    Estrella Izurieta, Johan Alexei

    2016-01-01

    The present project analyzes a viable and sustainable solution for companies that need to improve the availability, scalability, and security of their information by centralizing their satellite SQL Server databases in a SQL Server cluster. This will reduce administration time, increase the availability of information, and improve performance. Consequently, it will release hardware resources that could be reused on current requirements or in future projects, trying in thi...

  8. The Meaning of Logs

    NARCIS (Netherlands)

    Etalle, Sandro; Massacci, Fabio; Yautsiukhin, Artsiom

    2007-01-01

    While logging events is becoming increasingly common in computing, in communication and in collaborative work, log systems need to satisfy increasingly challenging (if not conflicting) requirements. Despite the growing pervasiveness of log systems, to date there is no high-level framework which

  9. The Meaning of Logs

    NARCIS (Netherlands)

    Etalle, Sandro; Massacci, Fabio; Yautsiukhin, Artsiom; Lambrinoudakis, Costas; Pernul, Günther; Tjoa, A Min

    While logging events is becoming increasingly common in computing, in communication and in collaborative environments, log systems need to satisfy increasingly challenging (if not conflicting) requirements. In this paper we propose a high-level framework for modeling log systems, and reasoning about

  10. Teaching an Old Log New Tricks with Machine Learning.

    Science.gov (United States)

    Schnell, Krista; Puri, Colin; Mahler, Paul; Dukatz, Carl

    2014-03-01

    To most people, the log file would not be considered an exciting area in technology today. However, these relatively benign, slowly growing data sources can drive large business transformations when combined with modern-day analytics. Accenture Technology Labs has built a new framework that helps to expand existing vendor solutions to create new methods of gaining insights from these benevolent information springs. This framework provides a systematic and effective machine-learning mechanism to understand, analyze, and visualize heterogeneous log files. These techniques enable an automated approach to analyzing log content in real time, learning relevant behaviors, and creating actionable insights applicable in traditionally reactive situations. Using this approach, companies can now tap into a wealth of knowledge residing in log file data that is currently being collected but underutilized because of its overwhelming variety and volume. By using log files as an important data input into the larger enterprise data supply chain, businesses have the opportunity to enhance their current operational log management solution and generate entirely new business insights: no longer limited to the realm of reactive IT management, but extending from proactive product improvement to defense from attacks. As we will discuss, this solution has immediate relevance in the telecommunications and security industries. However, the most forward-looking companies can take it even further. How? By thinking beyond the log file and applying the same machine-learning framework to other log file use cases (including logistics, social media, and consumer behavior) and any other transactional data source.
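
    One common way to let a model learn structure from heterogeneous log lines, shown here only as a generic sketch and not as the framework described above, is to vectorise the text of each line and cluster similar lines so that recurring behaviours surface automatically.

```python
# Generic log-line clustering sketch: TF-IDF vectorise each line, then group
# similar lines with k-means so recurring behaviours stand out.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

lines = [
    "user alice logged in from 10.0.0.4",
    "user bob logged in from 10.0.0.7",
    "disk quota exceeded for user carol",
    "disk quota exceeded for user dave",
    "connection timeout while contacting db01",
]

X = TfidfVectorizer().fit_transform(lines)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for line, label in zip(lines, labels):
    print(label, line)
```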

  11. The optimal control in batch arrival queue with server vacations, startup and breakdowns

    Directory of Open Access Journals (Sweden)

    Ke Jau-Chuan

    2004-01-01

    This paper studies the N policy M[x]/G/1 queue with server vacations, startup and breakdowns, where arrivals form a compound Poisson process and service times are generally distributed. The server is turned off and takes a vacation whenever the system is empty. If the number of customers waiting in the system at the instant of a vacation completion is less than N, the server will take another vacation. If the server returns from a vacation and finds at least N customers in the system, he is immediately turned on and requires a startup time before providing the service until the system is empty again. It is assumed that the server breaks down according to a Poisson process whose repair time has a general distribution. The system characteristics of such a model are analyzed and the total expected cost function per unit time is developed to determine the optimal threshold of N at a minimum cost.
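
    The cost trade-off behind the N policy can be illustrated with a much simpler model: the classical M/M/1 queue under an N policy, with no batch arrivals, vacations, startup times or breakdowns. The sketch below uses the standard closed-form expressions for that simpler queue and arbitrary example costs; it is not the cost function derived in the paper.

```python
# N-policy trade-off illustrated on the classical M/M/1 queue (a heavily
# simplified stand-in for the M[x]/G/1 model above). Cost constants are
# arbitrary example values.
import math

def cost_per_unit_time(N, lam, mu, hold_cost, setup_cost):
    rho = lam / mu
    mean_in_system = rho / (1 - rho) + (N - 1) / 2        # E[L] under the N policy
    cycle_length = N / (lam * (1 - rho))                   # E[idle period + busy period]
    return hold_cost * mean_in_system + setup_cost / cycle_length

lam, mu = 3.0, 5.0          # arrival and service rates
hold, setup = 2.0, 50.0     # holding cost per customer per unit time, setup cost per cycle

best_N = min(range(1, 51), key=lambda n: cost_per_unit_time(n, lam, mu, hold, setup))
print(best_N, round(cost_per_unit_time(best_N, lam, mu, hold, setup), 3))
# Closed-form check for this simple model: N* ~ sqrt(2 * setup * lam * (1 - rho) / hold)
print(round(math.sqrt(2 * setup * lam * (1 - lam / mu) / hold), 2))
```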

  12. 75 FR 8400 - In the Matter of Certain Wireless Communications System Server Software, Wireless Handheld...

    Science.gov (United States)

    2010-02-24

    ... Communications System Server Software, Wireless Handheld Devices and Battery Packs; Notice of Investigation... within the United States after importation of certain wireless communications system server software... certain wireless communications system server software, wireless handheld devices or battery packs that...

  13. Supporting the IEE-EU project 'Development of the market for energy-efficient servers'; Unterstuetzung des IEE-EU-Projekts 'Development of the market for energy efficient servers'

    Energy Technology Data Exchange (ETDEWEB)

    Huser, A.

    2009-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at work done within the framework of the European Union's project that aims to demonstrate the considerable potential for energy saving and cost reductions for IT servers in practice, and to support the market development for energy efficient servers. Guidelines for the procurement and management of energy efficient servers and server infrastructure that provide detailed recommendations for practical use are described. A two-page leaflet is reviewed that has been specially drawn up for the managing directors and IT managers of small and medium-sized companies. The most important recommendations for improved energy efficiency are reviewed and commented on. Optimisation measures are reviewed and energy-savings to be made are quoted.

  14. Destination Serbia: a new life for CERN’s servers

    CERN Multimedia

    Caroline Duc

    2012-01-01

    In order to ensure the computing performance that CERN's research needs, the Computer Centre has to replace its computers regularly. After Morocco, Ghana and Bulgaria, it's Serbia's turn to receive a donation of servers from CERN!   CERN Director-General Rolf Heuer and Jovan Puzovic from the Belgrade Institute of Physics seeing off the servers at the start of their journey to Serbia. On Monday 26 November, CERN donated 130 servers to two Serbian institutions: the Belgrade Institute of Physics and the Petnica Science School. In 2012, 559 computers were donated to institutions in Africa and Europe. Since the mid-2000s, the Computer Centre has changed technology and now has about 10,000 computers that have to be renewed every four to five years. Obsolete for the purposes of CERN's cutting-edge research, these computers are still suitable for less demanding applications. Jovan Puzovic, Belgrade Institute of Physics team leader for the NA61 experiment (SHINE), an...

  15. Vfold: a web server for RNA structure and folding thermodynamics prediction.

    Science.gov (United States)

    Xu, Xiaojun; Zhao, Peinan; Chen, Shi-Jie

    2014-01-01

    The ever increasing discovery of non-coding RNAs leads to unprecedented demand for the accurate modeling of RNA folding, including the predictions of two-dimensional (base pair) and three-dimensional all-atom structures and folding stabilities. Accurate modeling of RNA structure and stability has far-reaching impact on our understanding of RNA functions in human health and our ability to design RNA-based therapeutic strategies. The Vfold server offers a web interface to predict (a) RNA two-dimensional structure from the nucleotide sequence, (b) three-dimensional structure from the two-dimensional structure and the sequence, and (c) folding thermodynamics (heat capacity melting curve) from the sequence. To predict the two-dimensional structure (base pairs), the server generates an ensemble of structures, including loop structures with the different intra-loop mismatches, and evaluates the free energies using the experimental parameters for the base stacks and the loop entropy parameters given by a coarse-grained RNA folding model (the Vfold model) for the loops. To predict the three-dimensional structure, the server assembles the motif scaffolds using structure templates extracted from the known PDB structures and refines the structure using all-atom energy minimization. The Vfold-based web server provides a user friendly tool for the prediction of RNA structure and stability. The web server and the source codes are freely accessible for public use at "http://rna.physics.missouri.edu".
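
    For readers unfamiliar with two-dimensional (base-pair) structure prediction, the sketch below runs the classic Nussinov base-pair-maximisation recursion on a short sequence. It is deliberately not the Vfold free-energy model; it only conveys the flavour of such dynamic-programming algorithms.

```python
# Classic Nussinov base-pair maximisation (NOT the Vfold model): a toy
# dynamic program that counts the maximum number of nested base pairs.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_base_pairs(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                      # i left unpaired
            if (seq[i], seq[j]) in PAIRS:            # i pairs with j
                best = max(best, dp[i + 1][j - 1] + 1)
            for k in range(i + 1, j):                # bifurcation into two substructures
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(max_base_pairs("GGGAAAUCC"))   # small example sequence
```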

  16. The application of radiation logs to groundwater hydrology

    Energy Technology Data Exchange (ETDEWEB)

    Scott Keys, W [United States Geological Survey, Denver, CO (United States)

    1967-05-15

    The drilling of exploratory holes to determine the availability of groundwater and to plan the most economical methods of water development is expensive. The only technique available at present for obtaining geological and hydrological information through the casing of pre-existing water wells and other boreholes is by radiation logging. Up to now these logging techniques have been little used in groundwater hydrology. This report describes inexpensive portable radiation logging equipment that is available or has been developed for groundwater studies in connection with a general research project on the application of borehole geophysics in groundwater hydrology. It is possible to obtain data on the following: the source, velocity, and chemical quality of groundwater; the location, extent, geometry, bulk density, porosity, permeability, and specific yield of aquifers and associated strata; and the position of casings, casing collars, leaks, perforations, and cement. The radiation logs employed include natural gamma, gamma-gamma, neutron-gamma, neutron epithermal-neutron, and radioactive tracer. The following radioisotopes are utilized: cobalt-60, plutonium-239, americium-241, and iodine-131. Typical radiation logs obtained by the various techniques are described and examples are given of practical applications of radiation logging to groundwater investigations. The applications cited are studies of perched water in basaltic rocks and associated sedimentary strata; the porosity, moisture content, and position of zones into which water was injected in volcanic tuff; the position of the interface between brine and fresh water in fine-grained carbonate rocks and associated fine clastic rocks; the interpretation of porosity from a neutron log; and the location by means of a radioactive tracer of the more permeable fracture zones in a well penetrating crystalline rock. (author)

  17. Characterization of reservoir fractures using conventional geophysical logging

    Directory of Open Access Journals (Sweden)

    Paitoon Laongsakul

    2011-04-01

    In hydrocarbon exploration, fractures play an important role as possible pathways for hydrocarbon flow, thereby enhancing the overall formation permeability. Advanced logging methods for fracture analysis, like the borehole acoustic televiewer and the Formation Microscanner (FMS), are available, but these are additional and expensive tools. However, open fractures filled with water or hydrocarbons are also sensitive to electrical and other conventional logging methods. For this study, conventional logging data (electric, seismic, etc.) were available, plus additional fracture information from the FMS. Taking into account the borehole environment, the results show that the micro-spherically focused log indicates fractures by showing low-resistivity spikes opposite open fractures and high-resistivity spikes opposite sealed ones. Compressional and shear wave velocities are reduced when passing through the fracture zone, which is assumed to be more or less perpendicular to the borehole axis. The photoelectric absorption curve exhibits a very sharp peak in front of a fracture filled with barite-loaded mud cake. The density log shows low-density spikes that are not seen by the neutron log, usually where fractures, large vugs, or caverns exist. Borehole breakouts can cause a similar effect on the logging response as fractures, but fractures are often present when this occurs. A fracture index was calculated using thresholds and input weights, and there was in general good agreement with the fracture data from the FMS, especially in fracture zones, which mainly contribute to the hydraulic system of the reservoir. Finally, the overall results from this study using one well are promising; however, further research into the combination of different tools for fracture identification is recommended, as well as the use of core data for further validation.
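
    A schematic version of the threshold-and-weight fracture index mentioned above might look like the sketch below. Curve names, weights and the threshold are illustrative assumptions, not the study's actual parameters.

```python
# Schematic fracture-index calculation: weighted average of fracture
# indicators derived from conventional logs, flagged against a threshold.
# All curve names, weights and the threshold are illustrative assumptions.
import numpy as np

def fracture_index(curves: dict, weights: dict, threshold: float = 0.5):
    """curves: depth-indexed arrays of fracture indicators already scaled to 0-1."""
    total_w = sum(weights.values())
    index = sum(weights[name] * np.asarray(curves[name]) for name in weights) / total_w
    return index, index >= threshold          # continuous index and fracture flag

curves = {
    "micro_resistivity_spike": np.array([0.1, 0.8, 0.9, 0.2]),
    "sonic_velocity_drop":     np.array([0.0, 0.7, 0.6, 0.1]),
    "density_low_spike":       np.array([0.2, 0.9, 0.5, 0.0]),
}
weights = {"micro_resistivity_spike": 0.5, "sonic_velocity_drop": 0.3, "density_low_spike": 0.2}
fi, is_fracture = fracture_index(curves, weights)
print(np.round(fi, 2), is_fracture)
```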

  18. Microsoft Exchange Server PowerShell cookbook

    CERN Document Server

    Andersson, Jonas

    2015-01-01

    This book is for messaging professionals who want to build real-world scripts with Windows PowerShell 5 and the Exchange Management Shell. If you are a network or systems administrator responsible for managing and maintaining Exchange Server 2013, you will find this highly useful.

  19. CovalentDock Cloud: a web server for automated covalent docking.

    Science.gov (United States)

    Ouyang, Xuchang; Zhou, Shuo; Ge, Zemei; Li, Runtao; Kwoh, Chee Keong

    2013-07-01

    Covalent binding is an important mechanism for many drugs to gain their function. We developed a computational algorithm to model this chemical event and extended it to a web server, the CovalentDock Cloud, to make it accessible directly online without any local installation and configuration. It provides a simple yet user-friendly web interface to perform covalent docking experiments and analysis online. The web server accepts the structures of both the ligand and the receptor uploaded by the user or retrieved from online databases with a valid access ID. It identifies the potential covalent binding patterns, carries out the covalent docking experiments and provides visualization of the result for user analysis. This web server is free and open to all users at http://docking.sce.ntu.edu.sg/.

  20. Feasibility of the Raspberry Pi as a Web Server: A Performance Comparison of Nginx, Apache, and Lighttpd on the Raspberry Pi Platform

    Directory of Open Access Journals (Sweden)

    Rahmad Dawood

    2014-04-01

    Raspberry Pi is a small-sized computer, but it can function like an ordinary computer. Because it can function like a regular PC, it is also possible to run a web server application on it. This paper reports the results of testing the feasibility and performance of running a web server on the Raspberry Pi. The test was conducted on the current top three most popular web servers: Apache, Nginx, and Lighttpd. The parameters used to evaluate the feasibility and performance of these web servers were maximum request and reply time. The results of the test showed that it is feasible to run all three web servers on the Raspberry Pi, but Nginx gave the best performance, followed by Lighttpd and Apache. Keywords: Raspberry Pi, web server, Apache, Lighttpd, Nginx, web server performance
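
    A simple sketch of the kind of measurement behind a "maximum reply time" metric: time repeated HTTP GET requests against each server and report the worst case. The URLs and ports are placeholders for Apache, Nginx and Lighttpd instances on a Raspberry Pi, not the paper's actual benchmark setup.

```python
# Rough reply-time measurement: repeatedly GET each server's front page and
# keep the worst observed latency. URLs/ports are placeholder assumptions.
import time
import requests

SERVERS = {
    "nginx":    "http://raspberrypi.local:8080/",
    "apache":   "http://raspberrypi.local:8081/",
    "lighttpd": "http://raspberrypi.local:8082/",
}

def max_reply_time(url, n=100):
    worst = 0.0
    for _ in range(n):
        start = time.perf_counter()
        requests.get(url, timeout=10).raise_for_status()
        worst = max(worst, time.perf_counter() - start)
    return worst

for name, url in SERVERS.items():
    print(f"{name}: worst of 100 requests = {max_reply_time(url) * 1000:.1f} ms")
```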