WorldWideScience

Sample records for client server computing

  1. Open client/server computing and middleware

    CERN Document Server

    Simon, Alan R

    2014-01-01

    Open Client/Server Computing and Middleware provides a tutorial-oriented overview of open client/server development environments and of how client/server computing is being done. This book analyzes an in-depth set of case studies of two different open client/server development environments, Microsoft Windows and UNIX, describing the architectures, various product components, and how these environments interrelate. Topics include open systems and client/server computing, next-generation client/server architectures, principles of middleware, and an overview of ProtoGen+. The ViewPaint environment

  2. A Client-Server Architecture for an Instructional Environment Based on Computer Networks and the Internet.

    Science.gov (United States)

    Guidon, Jacques; Pierre, Samuel

    1996-01-01

    Discusses the use of computers in education and training and proposes a client-server architecture for an experimental computer environment as an approach to a virtual classroom. Highlights include the World Wide Web and client software, document delivery, hardware architecture, and Internet resources and services. (Author/LRW)

  3. Implementing a Physician's Workstation using client/server technology and the distributed computing environment.

    Science.gov (United States)

    Pham, T Q; Young, C Y; Tang, P C; Suermondt, H J; Annevelink, J

    1994-01-01

    PWS is a physician's workstation research prototype developed to explore the use of information management tools by physicians in the context of patient care. The original prototype was implemented in a client/server architecture using a broadcast message server. As we expanded the scope of the prototyping activities, we identified the limitations of the broadcast message server in the areas of scalability, security, and interoperability. To address these issues, we reimplemented PWS using the Open Software Foundation's Distributed Computing Environment (DCE). We describe the rationale for using DCE, the migration process, and the benefits achieved. Future work and recommendations are discussed.

  4. Client-server framework for securely outsourcing computations

    NARCIS (Netherlands)

    Veugen, P.J.M.

    2016-01-01

    In the current age of information, with growing internet connectivity, people are looking for service providers to store their data, and compute with it. On the other hand, sensitive personal data is easily misused for unintended purposes. Wouldn’t it be great to have a scalable framework, where mul

  5. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  6. Client-server password recovery

    NARCIS (Netherlands)

    Chmielewski, Ł.; Hoepman, J.H.; Rossum, P. van

    2009-01-01

    Human memory is not perfect - people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the pass

  8. Client/server approach to image capturing

    Science.gov (United States)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum-scanners with photo multiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven

  9. TJ-II data retrieving by means of a client/server model

    Science.gov (United States)

    Vega, J.; Sánchez, E.; Crémy, C.; Portas, A.; Dulya, C. M.; Nilsson, J.

    1999-01-01

    The database of the TJ-II flexible heliac is centralized in a Unix server. This computer also commands the on-line processes related to data acquisition during TJ-II discharges: programming of measurement systems, connectivity with control systems, data visualization, and computations. The server has to provide access to the data so that signal analysis can be performed by local users or even from remote hosts. Data retrieving is accomplished by means of a client/server architecture in which two data servers are permanently running in the background of the Unix computer. One of them serves data requests from local clients and the other one sends data to remote clients. The communication protocol in both cases has been developed by using TCP/IP and Berkeley sockets. The client part consists of a set of routines (FORTRAN and C callable), which, in a transparent way, provide connectivity with the servers. This structure allows access to TJ-II data exactly in the same way from any computer, hiding not only specific aspects of the database, but also the hardware architecture of the server computer. In addition, the remote access makes it possible to distribute computations and to reduce the load on the Unix server from analysis and visualization tasks. At present, this software is running in four different environments: the Unix server itself, various types of Unix workstations, a CRAY J90 and a CRAY T3E. Finally, due to the fact that visualization is essential for TJ-II data analysis, a powerful and very flexible visualization tool has been developed. It is a point and click application based on X Window/Motif. Data access is carried out through the client/server processes mentioned above and the software runs in the client computer.
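
    The record describes a simple request/reply protocol over TCP/IP and Berkeley sockets, with client routines callable from FORTRAN and C. As a rough illustration of that pattern (not the actual TJ-II wire format), the following Java sketch assumes an invented protocol: the client sends a signal name, and the server answers with a sample count followed by raw float samples. The port number and the signal name "DENSITY" are placeholders.

        import java.io.*;
        import java.net.*;

        // Minimal sketch of a TCP request/reply data server and a client routine.
        // The protocol (signal name in, sample count plus float samples out), the
        // port and the signal name are illustrative assumptions only.
        public class SignalServerSketch {

            // Server side: answer one request per connection with synthetic samples.
            static void serve(int port) throws IOException {
                try (ServerSocket listener = new ServerSocket(port);
                     Socket conn = listener.accept();
                     DataInputStream in = new DataInputStream(conn.getInputStream());
                     DataOutputStream out = new DataOutputStream(conn.getOutputStream())) {
                    String signalName = in.readUTF();          // which signal the client wants
                    float[] samples = new float[8];            // stand-in for real acquisition data
                    for (int i = 0; i < samples.length; i++) samples[i] = i * 0.5f;
                    out.writeInt(samples.length);              // header: number of samples
                    for (float v : samples) out.writeFloat(v); // payload: the samples themselves
                    out.flush();
                }
            }

            // Client side: the kind of routine a FORTRAN/C-callable wrapper could expose.
            static float[] readSignal(String host, int port, String signalName) throws IOException {
                try (Socket conn = new Socket(host, port);
                     DataOutputStream out = new DataOutputStream(conn.getOutputStream());
                     DataInputStream in = new DataInputStream(conn.getInputStream())) {
                    out.writeUTF(signalName);
                    out.flush();
                    int n = in.readInt();
                    float[] samples = new float[n];
                    for (int i = 0; i < n; i++) samples[i] = in.readFloat();
                    return samples;
                }
            }

            public static void main(String[] args) throws Exception {
                new Thread(() -> { try { serve(5600); } catch (IOException e) { e.printStackTrace(); } }).start();
                Thread.sleep(200);                             // crude wait for the server to bind
                float[] data = readSignal("localhost", 5600, "DENSITY");
                System.out.println("received " + data.length + " samples, first = " + data[0]);
            }
        }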

  10. CAG - computer-aid-georeferencing, or rapid sharing, restructuring and presentation of environmental data using remote-server georeferencing for the GE clients. Educational and scientific implications.

    Science.gov (United States)

    Hronusov, V. V.

    2006-12-01

    We suggest a method that uses external public servers to rearrange, restructure and rapidly share environmental data for quick presentation in numerous GE clients. The method introduces a new approach to the presentation (publication) of mostly static data stored in the public domain (e.g., Blue Marble, Visible Earth, etc.): freely accessible spreadsheets are published that contain sufficient information and links to the data. Because most large repositories of environmental monitoring data use a rather simple net address scheme and a simple hierarchy, mostly organized by the date and type of the data, an HTTP link to the file containing the data can be constructed. Publication of new data on the server is recorded simply by entering a new address into a cell of the spreadsheet. At the moment we use the EditGrid (www.editgrid.com) system as the spreadsheet platform. KML code is generated from the XML data using XSLT procedures. Since the EditGrid environment supports "fetch" and similar commands, "smart-adaptive" KML can be generated on the fly from RSS and XML data streams. Previous GIS-based methods could combine high-definition data from various sources, but large-scale comparisons of dynamic processes have usually been out of reach of the technology. The suggested method allows an unlimited number of GE clients to view, review and compare dynamic and static processes from previously un-combinable sources, and on unprecedented scales. The ease of automated or computer-assisted georeferencing has already led to the translation of about 3000 raster public-domain images and point and linear data sources into the GE language. In addition, the suggested method allows a user to create rapid animations to demonstrate dynamic processes; products in high demand in education, meteorology, volcanology and

  11. Advanced 3-D analysis, client-server systems, and cloud computing-Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement.

    Science.gov (United States)

    Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen

    2013-06-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR.

  12. Client-Server Connection Status Monitoring Using Ajax Push Technology

    Science.gov (United States)

    Lamongie, Julien R.

    2008-01-01

    This paper describes how simple client-server connection status monitoring can be implemented using Ajax (Asynchronous JavaScript and XML), JSF (Java Server Faces) and ICEfaces technologies. This functionality is required for NASA LCS (Launch Control System) displays used in the firing room for the Constellation project. Two separate implementations based on two distinct approaches are detailed and analyzed.
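
    The record's implementations rely on JSF and ICEfaces; one common underlying mechanism for Ajax push is long polling, in which the server holds each status request open until a connection event occurs. The sketch below illustrates only that generic idea with the JDK's built-in HTTP server; the endpoint path, port and status strings are invented, and this is not the NASA LCS implementation.

        import com.sun.net.httpserver.HttpServer;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.charset.StandardCharsets;
        import java.util.concurrent.Executors;
        import java.util.concurrent.SynchronousQueue;

        // Long-polling sketch: the server parks each /status request until a new
        // connection-status event arrives, then answers it. This is one common way
        // to realize "Ajax push"; it is not the JSF/ICEfaces mechanism of the paper.
        public class StatusPushSketch {
            // Each offered status string is handed to exactly one waiting poller.
            private static final SynchronousQueue<String> events = new SynchronousQueue<>();

            public static void main(String[] args) throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                server.setExecutor(Executors.newCachedThreadPool()); // blocked pollers must not starve each other
                server.createContext("/status", exchange -> {
                    try {
                        String status = events.take();               // block until something changes
                        byte[] body = status.getBytes(StandardCharsets.UTF_8);
                        exchange.sendResponseHeaders(200, body.length);
                        try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
                    } catch (InterruptedException e) {
                        exchange.sendResponseHeaders(503, -1);       // no body on interruption
                    }
                });
                server.start();

                // Simulated monitor: publish a fabricated status change every few seconds.
                new Thread(() -> {
                    try {
                        while (true) {
                            Thread.sleep(5000);
                            events.put("client-42: CONNECTED " + System.currentTimeMillis());
                        }
                    } catch (InterruptedException ignored) { }
                }).start();
                // A browser-side script would now loop: fetch /status, render, fetch again.
            }
        }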

  13. From P2P to Web services and grids: peers in a client/server world

    CERN Document Server

    Taylor, Ian J

    2005-01-01

    Provides an overview of peer-to-peer (P2P) technologies that have revolutionized the way we think about distributed computing and the internet. This book compares these technologies to alternative solutions, most notably web services and Grid computing but also other technologies, such as client/server based systems and agent technologies.

  14. Server-Aided Two-Party Computation with Simultaneous Corruption

    DEFF Research Database (Denmark)

    Cascudo Pueyo, Ignacio; Damgård, Ivan Bjerre; Ranellucci, Samuel

    We consider secure two-party computation in the client-server model where there are two adversaries that operate separately but simultaneously, each of them corrupting one of the parties and a restricted subset of servers that they interact with. We model security via the local universal composability ...

  15. Telematics-based online client-server/client collaborative environment for radiotherapy planning simulations.

    Science.gov (United States)

    Kum, Oyeon

    2007-11-01

    Customized cancer radiation treatment planning for each patient is very useful for both a patient and a doctor because it provides the ability to deliver higher doses to a more accurately defined tumor and at the same time lower doses to organs at risk and normal tissues. This can be realized by building an accurate planning simulation system to provide better treatment strategies based on each patient's tomographic data such as CT, MRI, PET, or SPECT. In this study, we develop a real-time online client-server/client collaborative environment between the client (health care professionals or hospitals) and the server/client under a secure network using telematics (the integrated use of telecommunications and medical informatics). The implementation is based on a point-to-point communication scheme between client and server/client following the WYSIWIS (what you see is what I see) paradigm. After uploading the patient tomographic data, the client is able to collaborate with the server/client for treatment planning. Consequently, the level of health care services can be improved, specifically for small radiotherapy clinics in rural/remote-country areas that do not possess much experience or equipment such as a treatment planning simulator. The telematics service of the system can also be used to provide continued medical education in radiotherapy. Moreover, the system is easy to use. A client can use the system if s/he is familiar with the Windows(TM) operating system because it is designed and built based on a user-friendly concept. This system does not require the client to perform hardware and software maintenance and updates. These are performed automatically by the server.

  16. Client Server Model Based DAQ System for Real-Time Air Pollution Monitoring

    Directory of Open Access Journals (Sweden)

    Vetrivel. P

    2014-01-01

    Full Text Available The proposed system consists of a client-server-model-based data acquisition unit. The embedded web server integrates a pollution server and a DAQ that collects air pollutant levels (CO, NO2, and SO2). The pollution server is designed with modern resource-constrained embedded systems in mind. In contrast, an application server is designed for the efficient execution of programs and scripts supporting the construction of various applications. While the pollution server mainly deals with sending HTML for display in a web browser on the client terminal, the application server provides access to server-side logic for pollutant levels to be used by client application programs. The embedded web server is an ARM MCB2300 board with internet connectivity; this standalone device both gathers air pollutant levels and acts as the air pollution server, and it is accessed by various clients.
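
    As a rough sketch of the pollution-server idea (an embedded device exposing its latest readings to web clients), the following Java program serves CO, NO2 and SO2 values over HTTP. The firmware on an ARM MCB2300 board would of course not be Java, and the sampling here is faked with a random-number generator; the port and URL path are also assumptions.

        import com.sun.net.httpserver.HttpServer;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.charset.StandardCharsets;
        import java.util.Random;

        // Sketch of a tiny "pollution server": clients GET /pollutants and receive the
        // latest CO, NO2 and SO2 readings as plain text. The sampling is simulated; on
        // real hardware it would come from the data acquisition unit.
        public class PollutionServerSketch {
            private static volatile double co, no2, so2;    // latest readings, shared with the sampler

            public static void main(String[] args) throws Exception {
                // Background sampler standing in for the DAQ.
                Thread sampler = new Thread(() -> {
                    Random rng = new Random();
                    while (true) {
                        co  = 0.5 + rng.nextDouble();       // fabricated values for the sketch
                        no2 = 0.02 + 0.01 * rng.nextDouble();
                        so2 = 0.01 + 0.01 * rng.nextDouble();
                        try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
                    }
                });
                sampler.setDaemon(true);
                sampler.start();

                HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
                server.createContext("/pollutants", exchange -> {
                    String body = String.format("CO=%.3f NO2=%.3f SO2=%.3f%n", co, no2, so2);
                    byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
                    exchange.sendResponseHeaders(200, bytes.length);
                    try (OutputStream os = exchange.getResponseBody()) { os.write(bytes); }
                });
                server.start();                             // any browser or client app can now poll the readings
            }
        }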

  17. Post-processing in cardiovascular computed tomography. Performance of a client server solution versus a stand-alone solution; Bildnachverarbeitung in der kardiovaskulaeren Computertomografie. Performance von Client-Server- versus Einzelplatzloesung

    Energy Technology Data Exchange (ETDEWEB)

    Luecke, C.; Foldyna, B.; Andres, C.; Grothoff, M.; Nitzsche, S.; Gutberlet, M.; Lehmkuhl, L. [Leipzig Univ. - Herzzentrum (Germany). Abt. fuer Diagnostische und Interventionelle Radiologie; Boehmer-Lasthaus, S. [Siemens Healthcare Sector, Erlangen (Germany). Imaging and Therapy Div.

    2014-12-15

    Purpose: To compare the performance of server-based (CSS) versus stand-alone post-processing software (ES) for the evaluation of cardiovascular CT examinations (cvCT) and to determine the crucial steps. Data of 40 patients (20 patients for coronary artery evaluation and 20 patients prior to transcatheter aortic valve implantation [TAVI]) were evaluated by 5 radiologists with CSS and ES. Data acquisition was performed using a dual-source 128-row CT unit (SOMATOM Definition Flash, Siemens, Erlangen, Germany) and a 64-row CT unit (Brilliance 64, Philips, Hamburg, Germany). The following workflow was evaluated: Data loading, aorta and coronary segmentation, curved multiplanar reconstruction (cMPR) and 3 D volume rendering technique (3D-VRT), measuring of coronary artery stenosis and planimetry of the aortic annulus. The time requirement and subjective quality for the workflow were evaluated. The coronary arteries as well as the TAVI data could be evaluated significantly faster with CSS (5.5 ± 2.9 min and 8.2 ± 4.0 min, respectively) than with ES (13.9 ± 5.2 min and 15.2 ± 10.9 min, respectively, p = 0.01). Segmentation of the aorta (CSS: 1.9 ± 2.0 min, ES: 3.7 ± 3.3 min), generating cMPR of coronaries (CSS: 0.5 ± 0.2 min, ES: 5.1 ± 2.6 min), aorta and iliac vessels (CSS: 0.5 ± 0.4 min and 0.4 ± 0.4 min, respectively, ES: 1.6 ± 0.7 min and 2.8 ± 3 min, respectively) could be performed significantly faster with CSS than with ES with higher quality of cMPR, measuring of coronary stenosis and 3D-VRT (p < 0.05). Evaluation of cvCT can be accomplished significantly faster and better with CSS than with ES. The segmentation remains the most time-consuming workflow step, so optimization of segmentation algorithms could improve performance even further.

  18. A Client-Server System for Ubiquitous Video Service

    Directory of Open Access Journals (Sweden)

    Ronit Nossenson

    2012-12-01

    Full Text Available In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are: efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous service requirements such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users.

  19. Design and Development of a Client-Server Fingerprint-Based Student Attendance System (Rancang Bangun Sistem Presensi Mahasiswa Berbasis Fingerprint Client Server)

    Directory of Open Access Journals (Sweden)

    Decki Noor Cahyadi

    2014-05-01

    Full Text Available Student attendance plays an important role in teaching and learning activities. The attendance system provided through SIMAK at ST3 Telkom has several shortcomings: it requires dedicated time to call the roll student by student, and it leaves room for fraud, because a lecturer who does not recognize a student's face may be deceived by a student posing as someone else. Based on this analysis, a new fingerprint-based, client-server attendance system is proposed. The system was built using the waterfall development method, with Microsoft Access as the DBMS and Visual Basic 6.0 as the programming language. Test results show that the attendance information system runs well and that its output matches the design.

  20. TogoDoc server/client system: smart recommendation and efficient management of life science literature.

    Science.gov (United States)

    Iwasaki, Wataru; Yamamoto, Yasunori; Takagi, Toshihisa

    2010-12-13

    In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the "tsunami" of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom.

  2. Location Privacy Techniques in Client-Server Architectures

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yiu, Man Lung

    2009-01-01

    A typical location-based service returns nearby points of interest in response to a user location. As such services are becoming increasingly available and popular, location privacy emerges as an important issue. In a system that does not offer location privacy, users must disclose their exact locations in order to receive the desired services. We view location privacy as an enabling technology that may lead to increased use of location-based services. In this chapter, we consider location privacy techniques that work in traditional client-server architectures without any trusted components other ... Third, their effectiveness is independent of the distribution of other users, unlike the k-anonymity approach. The chapter characterizes the privacy models assumed by existing techniques and categorizes these according to their approach. The techniques are then covered in turn according ...

  3. Proving the correctness of client/server software

    Indian Academy of Sciences (India)

    Eyad Alkassar; Sebastian Bogan; Wolfgang J Paul

    2009-02-01

    Remote procedure calls (RPCs) lie at the heart of any client/server software. Thus, formal specification and verification of RPC mechanisms is a prerequisite for the verification of any such software. In this paper, we present a mathematical specification of an RPC mechanism and we outline how to prove the correctness of an implementation — say written in C — of this mechanism at the code level. We define a formal model of user processes running concurrently under a simple operating system, which provides inter-process communication and portmapper system calls. A simple theory of non-interference permits us to use conventional sequential program analysis between system calls (within the concurrent model). An RPC mechanism is specified and the correctness proof for server implementations, using this mechanism, is outlined. To the best of our knowledge this is the first treatment of the correctness of an entire RPC mechanism at the code level.
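
    For readers unfamiliar with what an RPC mechanism provides, the sketch below shows the bare structure being specified: a client-side stub that marshals a procedure name and argument, and a server-side dispatcher that unmarshals the request, runs the procedure and returns the result. It is written in Java with an invented one-line text protocol and a single "toUpper" procedure; the mechanism verified in the paper (in C, with portmapper calls and a concurrent process model) is far more general.

        import java.io.*;
        import java.net.*;

        // Minimal remote-procedure-call sketch: stub marshals, dispatcher unmarshals,
        // runs the procedure and sends back the result over one line of text.
        public class MiniRpc {

            // Server: dispatch each request line to the named procedure.
            static void serve(int port) throws IOException {
                try (ServerSocket listener = new ServerSocket(port);
                     Socket s = listener.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] parts = line.split(" ", 2);
                        if (parts.length == 2 && parts[0].equals("toUpper")) {
                            out.println(parts[1].toUpperCase());   // the remote procedure itself
                        } else {
                            out.println("ERROR unknown procedure");
                        }
                    }
                }
            }

            // Client stub: looks like a local call, but marshals, sends and unmarshals.
            static String toUpperRemote(String host, int port, String arg) throws IOException {
                try (Socket s = new Socket(host, port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
                    out.println("toUpper " + arg);
                    return in.readLine();
                }
            }

            public static void main(String[] args) throws Exception {
                new Thread(() -> { try { serve(9090); } catch (IOException e) { e.printStackTrace(); } }).start();
                Thread.sleep(200);                                  // crude wait for the server to bind
                System.out.println(toUpperRemote("localhost", 9090, "remote procedure call"));
            }
        }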

  4. Application of Windows Socket Technique to Communication Process of the Train Diagram Network System Based on Client/Server Structure

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper focuses on the techniques for the design and realization of the process communications in the computer-aided train diagram network system. The Windows Socket technique is adopted to program the client and the server, create the system applications, and solve the problems of data transfer and data sharing in the system.

  5. Distributed analysis with CRAB: The client-server architecture evolution and commissioning

    Energy Technology Data Exchange (ETDEWEB)

    Codispoti, G.; /INFN, Bologna /Bologna U.; Cinquilli, M.; /INFN, Perugia; Fanfani, A.; /Bologna U.; Fanzago, F.; /CERN /INFN, CNAF; Farina, F.; /CERN /INFN, Milan Bicocca; Lacaprara, S.; /INFN, Legnaro; Miccio, V.; /CERN /INFN, CNAF; Spiga, D.; /CERN /INFN, Perugia /Perugia U.; Vaandering, E.; /Fermilab

    2008-01-01

    CRAB (CMS Remote Analysis Builder) is the tool used by CMS to enable running physics analysis in a transparent manner over data distributed across many sites. It abstracts out the interaction with the underlying batch farms, grid infrastructure and CMS workload management tools, such that it is easily usable by non-experts. CRAB can be used as a direct interface to the computing system or can delegate the user task to a server. Major efforts have been dedicated to the client-server system development, allowing the user to deal only with a simple and intuitive interface and to delegate all the work to a server. The server takes care of handling the user's jobs during the whole lifetime of the user's task. In particular, it takes care of the data and resource discovery, process tracking and output handling. It also provides services such as automatic resubmission in case of failures, notification to the user of the task status, and automatic blacklisting of sites showing evident problems beyond what is provided by existing grid infrastructure. The CRAB Server architecture and its deployment will be presented, as well as the current status and future development. In addition, the experience in using the system for initial detector commissioning activities and data analysis will be summarized.

  6. GrayStarServer: Server-side Spectrum Synthesis with a Browser-based Client-side User Interface

    Science.gov (United States)

    Short, C. Ian

    2016-10-01

    We present GrayStarServer (GSS), a stellar atmospheric modeling and spectrum synthesis code of pedagogical accuracy that is accessible in any web browser on commonplace computational devices and that runs on a timescale of a few seconds. The addition of spectrum synthesis annotated with line identifications extends the functionality and pedagogical applicability of GSS beyond that of its predecessor, GrayStar3 (GS3). The spectrum synthesis is based on a line list acquired from the NIST atomic spectra database, and the GSS post-processing and user interface client allows the user to inspect the plain text ASCII version of the line list, as well as to apply macroscopic broadening. Unlike GS3, GSS carries out the physical modeling on the server side in Java, and communicates with the JavaScript and HTML client via an asynchronous HTTP request. We also describe other improvements beyond GS3 such as a more physical treatment of background opacity and atmospheric physics, the comparison of key results with those of the Phoenix code, and the use of the HTML element for higher quality plotting and rendering of results. We also present LineListServer, a Java code for converting custom ASCII line lists in NIST format to the byte data type file format required by GSS so that users can prepare their own custom line lists. We propose a standard for marking up and packaging model atmosphere and spectrum synthesis output for data transmission and storage that will facilitate a web-based approach to stellar atmospheric modeling and spectrum synthesis. We describe some pedagogical demonstrations and exercises enabled by easily accessible, on-demand, responsive spectrum synthesis. GSS may serve as a research support tool by providing quick spectroscopic reconnaissance. GSS may be found at www.ap.smu.ca/~ishort/OpenStars/GrayStarServer/grayStarServer.html, and source tarballs for local installations of both GSS and LineListServer may be found at www.ap.smu.ca/~ishort/OpenStars/.
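
    The GSS client issues an asynchronous HTTP request from JavaScript so the browser stays responsive while the server computes. A minimal Java analogue of that client-side pattern, using java.net.http.HttpClient (Java 11+), is sketched below; the URL and its teff/logg query parameters are placeholders, not the real GrayStarServer interface.

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.concurrent.CompletableFuture;

        // Asynchronous HTTP request sketch, analogous to the browser client's call to a
        // server-side spectrum-synthesis endpoint. The endpoint and parameters are
        // illustrative placeholders.
        public class AsyncSpectrumRequest {
            public static void main(String[] args) {
                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:8080/synth?teff=5777&logg=4.44"))
                        .GET()
                        .build();

                // sendAsync returns immediately; the UI (or here, the main thread) stays responsive.
                CompletableFuture<Void> pending = client
                        .sendAsync(request, HttpResponse.BodyHandlers.ofString())
                        .thenAccept(response ->
                                System.out.println("spectrum payload, " + response.body().length() + " characters"))
                        .exceptionally(err -> { System.err.println("request failed: " + err); return null; });

                System.out.println("request dispatched, doing other work...");
                pending.join();                          // wait only so the demo does not exit early
            }
        }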

  7. GrayStarServer: Server-side spectrum synthesis with a browser-based client-side user interface

    CERN Document Server

    Short, C Ian

    2016-01-01

    I present GrayStarServer (GSS), a stellar atmospheric modeling and spectrum synthesis code of pedagogical accuracy that is accessible in any web browser on commonplace computational devices and that runs on a time-scale of a few seconds. The addition of spectrum synthesis annotated with line identifications extends the functionality and pedagogical applicability of GSS beyond that of its predecessor, GrayStar3 (GS3). The spectrum synthesis is based on a line list acquired from the NIST atomic spectra database, and the GSS post-processing and user interface (UI) client allows the user to inspect the plain text ASCII version of the line list, as well as to apply macroscopic broadening. Unlike GS3, GSS carries out the physical modeling on the server side in Java, and communicates with the JavaScript and HTML client via an asynchronous HTTP request. I also describe other improvements beyond GS3 such as more realistic modeling physics and use of the HTML element for higher quality plotting and rendering of result...

  8. Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser

    Science.gov (United States)

    Technology & Learning, 2005

    2005-01-01

    In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…

  9. Solid Waste Information and Tracking System Client Server Conversion Project Management Plan

    Energy Technology Data Exchange (ETDEWEB)

    GLASSCOCK, J.A.

    2000-02-10

    The Project Management Plan governing the conversion of SWITS to a client-server architecture. The PMP describes the background, planning and management of the SWITS conversion. Requirements and specification documentation needed for the SWITS conversion

  10. NeuroTerrain – a client-server system for browsing 3D biomedical image data sets

    Directory of Open Access Journals (Sweden)

    Nissanov Jonathan

    2007-02-01

    Full Text Available Background: Three dimensional biomedical image sets are becoming ubiquitous, along with the canonical atlases providing the necessary spatial context for analysis. To make full use of these 3D image sets, one must be able to present views for 2D display, either surface renderings or 2D cross-sections through the data. Typical display software is limited to presentations along one of the three orthogonal anatomical axes (coronal, horizontal, or sagittal). However, data sets precisely oriented along the major axes are rare. To make fullest use of these datasets, one must reasonably match the atlas' orientation; this involves resampling the atlas in planes matched to the data set. Traditionally, this requires the atlas and browser reside on the user's desktop; unfortunately, in addition to being monolithic programs, these tools often require substantial local resources. In this article, we describe a network-capable, client-server framework to slice and visualize 3D atlases at off-axis angles, along with an open client architecture and development kit to support integration into complex data analysis environments. Results: Here we describe the basic architecture of a client-server 3D visualization system, consisting of a thin Java client built on a development kit, and a computationally robust, high-performance server written in ANSI C++. The Java client components (NetOStat) support arbitrary-angle viewing and run on readily available desktop computers running Mac OS X, Windows XP, or Linux as a downloadable Java Application. Using the NeuroTerrain Software Development Kit (NT-SDK), sophisticated atlas browsing can be added to any Java-compatible application requiring as little as 50 lines of Java glue code, thus making it eminently re-useable and much more accessible to programmers building more complex, biomedical data analysis tools. The NT-SDK separates the interactive GUI components from the server control and monitoring, so as to support
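
    To make concrete what "slicing a 3D atlas at off-axis angles" involves, the sketch below resamples an arbitrary oblique plane from a volume by nearest-neighbour lookup. It is only an illustration of the geometry; the NeuroTerrain server is written in ANSI C++ and its actual resampling is presumably more careful (e.g., interpolated).

        // Nearest-neighbour resampling of an off-axis plane from a 3D volume.
        public class ObliqueSlice {
            // origin is a voxel coordinate; u and v are in-plane step vectors (one per output row/column).
            static double[][] slice(double[][][] vol, double[] origin, double[] u, double[] v, int rows, int cols) {
                double[][] out = new double[rows][cols];
                for (int r = 0; r < rows; r++) {
                    for (int c = 0; c < cols; c++) {
                        int x = (int) Math.round(origin[0] + r * u[0] + c * v[0]);
                        int y = (int) Math.round(origin[1] + r * u[1] + c * v[1]);
                        int z = (int) Math.round(origin[2] + r * u[2] + c * v[2]);
                        boolean inside = x >= 0 && x < vol.length
                                && y >= 0 && y < vol[0].length
                                && z >= 0 && z < vol[0][0].length;
                        out[r][c] = inside ? vol[x][y][z] : 0.0;   // pad outside the volume with zero
                    }
                }
                return out;
            }

            public static void main(String[] args) {
                // Synthetic 64x64x64 volume with a simple intensity ramp.
                double[][][] vol = new double[64][64][64];
                for (int i = 0; i < 64; i++)
                    for (int j = 0; j < 64; j++)
                        for (int k = 0; k < 64; k++)
                            vol[i][j][k] = i + j + k;
                // A plane tilted 45 degrees between the x and y axes, stepping straight along z.
                double s = Math.sqrt(0.5);
                double[][] img = slice(vol, new double[]{32, 0, 0},
                        new double[]{s, s, 0}, new double[]{0, 0, 1}, 40, 64);
                System.out.println("sample value: " + img[10][20]);
            }
        }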

  11. Visualization of roaming client/server connection patterns during a wirelessly enabled disaster response drill.

    Science.gov (United States)

    Calvitti, Alan; Lenert, Leslie A; Brown, Steven W

    2006-01-01

    Assessment of how well a multiple client server system is functioning is a difficult task. In this poster we present visualization tools for such assessments. Arranged on a timeline, UDP client connection events are point-like. TCP client events are structured into intervals. Informative patterns and correlations are revealed by both sets. For the latter, comparison of two visualization schemes on the same timeline yields additional insights.

  12. An ECG storage and retrieval system embedded in client server HIS utilizing object-oriented DB.

    Science.gov (United States)

    Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S

    1996-02-01

    In the University of Tokyo Hospital, the improved client server HIS has been applied to clinical practice and physicians can order prescription, laboratory examination, ECG examination and radiographic examination, etc. directly by themselves and read results of these examinations, except medical signal waves, schema and image, on UNIX workstations. Recently, we designed and developed an ECG storage and retrieval system embedded in the client server HIS utilizing object-oriented database to take the first step in dealing with digitized signal, schema and image data and show waves, graphics, and images directly to physicians by the client server HIS. The system was developed based on object-oriented analysis and design, and implemented with object-oriented database management system (OODMS) and C++ programming language. In this paper, we describe the ECG data model, functions of the storage and retrieval system, features of user interface and the result of its implementation in the HIS.

  13. The Key Implementation Technology of Client/Server's Asynchronous Communication Programs

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper introduces the implementation method, key technology and flowchart of client/server asynchronous communication programs on Linux or Unix, and further explains a few problems that deserve attention for improving CPU efficiency when implementing asynchronous communication programs.
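
    The paper targets C programs on Linux/Unix, where CPU-efficient asynchronous communication usually means multiplexing many connections in one thread with select() or poll() instead of blocking per client. The sketch below shows the same readiness-driven pattern using Java NIO; it illustrates the idea and is not the paper's implementation.

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.SelectionKey;
        import java.nio.channels.Selector;
        import java.nio.channels.ServerSocketChannel;
        import java.nio.channels.SocketChannel;
        import java.util.Iterator;

        // Readiness-driven (select()-style) echo server: one thread multiplexes all
        // client connections instead of blocking per client.
        public class AsyncEchoServer {
            public static void main(String[] args) throws IOException {
                Selector selector = Selector.open();
                ServerSocketChannel listener = ServerSocketChannel.open();
                listener.bind(new InetSocketAddress(7000));
                listener.configureBlocking(false);
                listener.register(selector, SelectionKey.OP_ACCEPT);

                ByteBuffer buffer = ByteBuffer.allocate(4096);
                while (true) {
                    selector.select();                           // sleep until some channel is ready
                    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                    while (keys.hasNext()) {
                        SelectionKey key = keys.next();
                        keys.remove();
                        if (key.isAcceptable()) {                // new client: register it for reads
                            SocketChannel client = listener.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {           // data from an existing client: echo it back
                            SocketChannel client = (SocketChannel) key.channel();
                            buffer.clear();
                            int n = client.read(buffer);
                            if (n == -1) {                       // client closed the connection
                                key.cancel();
                                client.close();
                            } else {
                                buffer.flip();
                                client.write(buffer);
                            }
                        }
                    }
                }
            }
        }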

  14. A Rich Client-Server Based Framework for Convenient Security and Management of Mobile Applications

    Science.gov (United States)

    Badan, Stephen; Probst, Julien; Jaton, Markus; Vionnet, Damien; Wagen, Jean-Frédéric; Litzistorf, Gérald

    Contact lists, Emails, SMS or custom applications on a professional smartphone could hold very confidential or sensitive information. What could happen in case of theft or accidental loss of such devices? Such events could be detected by the separation between the smartphone and a Bluetooth companion device. This event should typically block the applications and delete personal and sensitive data. Here, a solution is proposed based on a secured framework application running on the mobile phone as a rich client connected to a security server. The framework offers strong and customizable authentication and secured connectivity. A security server manages all security issues. User applications are then loaded via the framework. User data can be secured, synchronized, pushed or pulled via the framework. This contribution proposes a convenient although secured environment based on a client-server architecture using external authentications. Several features of the proposed system are exposed and a practical demonstrator is described.

  15. CTserver: A Computational Thermodynamics Server for the Geoscience Community

    Science.gov (United States)

    Kress, V. C.; Ghiorso, M. S.

    2006-12-01

    The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser- based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed

  16. Research and Implementation of Client-server Based E-mail Translator

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The design and implementation of EATS, a machine translation system for e-mail, are presented. It first puts forward the notion of an "instant machine translation service" and illustrates how it is provided through the client-server mode in EATS. Then this paper gives a panoramic view of the realization of the Chinese-English bi-directional translation module through a multi-engine strategy. The prototype of the system has been successfully demonstrated in a campus net in PPP mode, with 70%~80% translation accuracy.

  17. Development of Client-Server Application by Using UDP Socket Programming for Remotely Monitoring CNC Machine Environment in Fixture Process

    Directory of Open Access Journals (Sweden)

    Darmawan Darmawan

    2016-08-01

    Full Text Available The use of computer technology in manufacturing industries can improve manufacturing flexibility significantly, especially in manufacturing processes; many software applications have been utilized to improve machining performance. However, none of them has discussed the abilities to perform direct machining. In this paper, an integrated system for remote operation and monitoring of Computer Numerical Control (CNC) machines is put into consideration. The integrated system includes computerization, network technology, and improved holding mechanism. The work proposed by this research is mainly on the software development for such integrated system. It uses Java three-dimensional (3D) programming and Virtual Reality Modeling Language (VRML) at the client side for visualization of machining environment. This research is aimed at developing a control system to remotely operate and monitor a self-reconfiguration fixture mechanism of a CNC milling machine through internet connection and integration of Personal Computer (PC)-based CNC controller, a server side, a client side and CNC milling. The performance of the developed system was evaluated by testing with one type of common protocols particularly User Datagram Protocol (UDP). Using UDP, the developed system requires 3.9 seconds to complete the close clamping, less than 1 second to release the clamping and it can deliver 463 KiloByte.
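
    As a small illustration of the UDP request/response style the paper evaluates, the sketch below sends a clamp command as a datagram and waits (with a timeout, since UDP gives no delivery guarantee) for a status reply. The "CLAMP"/"RELEASE" commands, the port and the status strings are invented, not the protocol of the fixture system described here.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;
        import java.nio.charset.StandardCharsets;

        // UDP request/response sketch: the client sends a clamp command as a datagram
        // and the monitoring server replies with a status string.
        public class UdpFixtureSketch {

            // Server: answer each incoming command with a one-line status report.
            static void serve(int port) throws Exception {
                try (DatagramSocket socket = new DatagramSocket(port)) {
                    byte[] buf = new byte[512];
                    while (true) {
                        DatagramPacket request = new DatagramPacket(buf, buf.length);
                        socket.receive(request);                          // blocks until a command arrives
                        String command = new String(request.getData(), 0, request.getLength(), StandardCharsets.UTF_8);
                        String status = command.trim().equals("CLAMP") ? "STATUS: clamped" : "STATUS: released";
                        byte[] reply = status.getBytes(StandardCharsets.UTF_8);
                        socket.send(new DatagramPacket(reply, reply.length,
                                request.getAddress(), request.getPort()));
                    }
                }
            }

            // Client: fire a command and wait (with a timeout) for the status reply.
            static String sendCommand(String host, int port, String command) throws Exception {
                try (DatagramSocket socket = new DatagramSocket()) {
                    socket.setSoTimeout(3000);                            // UDP offers no delivery guarantee
                    byte[] out = command.getBytes(StandardCharsets.UTF_8);
                    socket.send(new DatagramPacket(out, out.length, InetAddress.getByName(host), port));
                    byte[] buf = new byte[512];
                    DatagramPacket reply = new DatagramPacket(buf, buf.length);
                    socket.receive(reply);
                    return new String(reply.getData(), 0, reply.getLength(), StandardCharsets.UTF_8);
                }
            }

            public static void main(String[] args) throws Exception {
                Thread server = new Thread(() -> { try { serve(6000); } catch (Exception e) { e.printStackTrace(); } });
                server.setDaemon(true);
                server.start();
                Thread.sleep(200);
                System.out.println(sendCommand("localhost", 6000, "CLAMP"));
            }
        }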

  18. IBM eServer iSeries 400 Client/Server Programming (IBM eServer iSeries 400 Client/Server程序设计)

    Institute of Scientific and Technical Information of China (English)

    曹玉华; 张维君

    2001-01-01

    Writing efficient and secure programs is the foundation of complex applications for the IBM eServer iSeries 400 server, particularly ERP applications. Through an analysis of client/server programming for the iSeries 400 server, this paper discusses and proposes design, implementation, and review schemes for system application development projects.

  19. Evaluating the Influence of the Client Behavior in Cloud Computing.

    Science.gov (United States)

    Souza Pardo, Mário Henrique; Centurion, Adriana Molina; Franco Eustáquio, Paulo Sérgio; Carlucci Santana, Regina Helena; Bruschi, Sarita Mazzini; Santana, Marcos José

    2016-01-01

    This paper proposes a novel approach for the implementation of simulation scenarios, providing a client entity for cloud computing systems. The client entity allows the creation of scenarios in which the client behavior has an influence on the simulation, making the results more realistic. The proposed client entity is based on several characteristics that affect the performance of a cloud computing system, including different modes of submission and their behavior when the waiting time between requests (think time) is considered. The proposed characterization of the client enables the sending of either individual requests or groups of Web services in scenarios where the workload takes the form of bursts. The client entity is included in CloudSim, a framework for modelling and simulation of cloud computing. Experimental results show the influence of the client behavior on the performance of the services executed in a cloud computing system.

  20. FRIEND Engine Framework: A real time neurofeedback client-server system for neuroimaging studies

    Directory of Open Access Journals (Sweden)

    Rodrigo eBasilio

    2015-01-01

    Full Text Available In this methods article, we present a new implementation of a recently reported FSL-integrated neurofeedback tool, the standalone version of Functional Real-time Interactive Endogenous Modulation and Decoding (FRIEND). We will refer to this new implementation as the FRIEND Engine Framework. The framework comprises a client-server cross-platform solution for real time fMRI and fMRI/EEG neurofeedback studies, enabling flexible customization or integration of graphical interfaces, devices and data processing. This implementation allows a fast setup of novel plug-ins and frontends, which can be shared with the user community at large. The FRIEND Engine Framework is freely distributed for non-commercial, research purposes.

  1. Solid waste information and tracking system client-server conversion project management plan

    Energy Technology Data Exchange (ETDEWEB)

    May, D.L.

    1998-04-15

    This Project Management Plan is the lead planning document governing the proposed conversion of the Solid Waste Information and Tracking System (SWITS) to a client-server architecture. This plan presents the content specified by American National Standards Institute (ANSI)/Institute of Electrical and Electronics Engineers (IEEE) standards for software development, with additional information categories deemed to be necessary to describe the conversion fully. This plan is a living document that will be reviewed on a periodic basis and revised when necessary to reflect changes in baseline design concepts and schedules. This PMP describes the background, planning and management of the SWITS conversion. It does not constitute a statement of product requirements. Requirements and specification documentation needed for the SWITS conversion will be released as supporting documents.

  2. Realization of client/server management information system of coal mine based on ODBC in geology and survey

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Q.; Mao, S.; Yang, F.; Han, Z. [Shandong University of Science and Technology (China). Geoscience Department

    2000-08-01

    The paper describes in detail the framework and the application theory of Open Database Connectivity (ODBC), the formation of a client/server system of geological and surveying management information system, and the connection of the various databases. Then systematically, the constitution and functional realization of the geological management information system are introduced. 5 refs., 5 figs.

  3. A New Type of Distributed Application Server System Design Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ying-ying Chen

    2012-11-01

    Full Text Available Today's application server systems, such as e-commerce platforms, instant messaging systems, and enterprise information systems, can lose connections and suffer data latency when they face too many concurrent requests, owing to limitations of the application server architecture, the system architecture, and so on; in serious cases the server blocks entirely. The new type of application server system contains four parts: a client program, transfer servers, application servers and databases. The application server is the core of the system; its performance determines the system's performance. At the same time, the application servers and transfer servers can be designed as open web services, and they can be realized as a distributed architecture over a number of hardware servers, which can effectively handle highly concurrent client application requests.

  4. Building Mail Server on Distributed Computing System

    Institute of Scientific and Technical Information of China (English)

    Shibata, Akihiro; Hamada, Osamu; et al.

    2001-01-01

    Electronic mail has become an indispensable function in daily work, and server stability and performance are required. Using DCE and DFS we have built a distributed electronic mail server; that is, servers such as SMTP and IMAP are distributed symmetrically and provide seamless access.

  5. Client Anticipations about Computer-Assisted Career Guidance System Outcomes.

    Science.gov (United States)

    Osborn, Debra S.; Peterson, Gary W.; Sampson, James P., Jr.; Reardon, Robert C.

    2003-01-01

    This study describes how 55 clients from a career center at a large, southeastern university anticipated using computer-assisted career guidance (CACG) systems to help in their career decision making and problem solving. Responses to a cued and a free response survey indicated that clients' most frequent anticipations included increased career…

  6. A Satellite Data-Driven, Client-Server Decision Support Application for Agricultural Water Resources Management

    Science.gov (United States)

    Johnson, Lee F.; Maneta, Marco P.; Kimball, John S.

    2016-01-01

    Water cycle extremes such as droughts and floods present a challenge for water managers and for policy makers responsible for the administration of water supplies in agricultural regions. In addition to the inherent uncertainties associated with forecasting extreme weather events, water planners need to anticipate water demands and water user behavior in atypical circumstances. This requires the use of decision support systems capable of simulating agricultural water demand with the latest available data. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. In previous work we have demonstrated novel methodologies to use satellite-based observational technologies, in conjunction with hydro-economic models and state of the art data assimilation methods, to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, and land allocation. These methods create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents. The methods can be driven with information from existing satellite-derived operational products, such as the Satellite Irrigation Management Support system (SIMS) operational over California, the Cropland Data Layer (CDL), and using a modified light-use efficiency algorithm to retrieve crop yield from the synergistic use of MODIS and Landsat imagery. Here we present an integration of this modeling framework in a client-server architecture based on the Hydra platform. Assimilation and processing of resource intensive remote sensing data, as well as hydrologic and other ancillary information occur on the server side. This information is processed and summarized as attributes in water demand nodes that are part of a vector description of the water distribution network. With this architecture, our decision support system becomes a lightweight 'app' that

  7. The HELIOS Unification Bus: a toolbox to develop client/server applications.

    Science.gov (United States)

    Sauquet, D; Jean, F C; Lemaitre, D; Zaplétal, E; Degoulet, P

    1994-12-01

    In the medical domain, new developments commonly rely on client/server architectures. But when faced with distributed environments, software developers encounter tremendously increasing complexity when building integrated applications. This paper presents the HELIOS Unification Bus (HUB), a communication integration framework for the HELIOS medical software engineering environment that allows the exchange of data between components that can be hosted on heterogeneous machines linked by a network. The HUB is developed as a C++ toolbox over UNIX and TCP/IP. It includes a message routing entity called the router and a generic application programming interface (API), implemented as a C++ library, that makes it easy to build software components compliant with the standardised HELIOS language. Messages conveyed by the bus are composite objects that are serialized to be transmitted over the bus using the ASN.1 ISO presentation protocol. The article describes the use of the bus to ease the development and execution of distributed medical applications and its role from the communication integration standpoint.

  8. An Intra-Server Interconnect Fabric for Heterogeneous Computing

    Institute of Scientific and Technical Information of China (English)

    曹政; 刘小丽; 李强; 刘小兵; 王展; 安学军

    2014-01-01

    With the increasing diversity of application needs and computing units, the server with heterogeneous processors is more and more widespread. However, conventional SMP/ccNUMA server architecture introduces a communication bottleneck between heterogeneous processors and only uses heterogeneous processors as coprocessors, which limits the efficiency and flexibility of using heterogeneous processors. To solve this problem, this paper proposes an intra-server interconnect fabric that supports both intra-server peer-to-peer interconnection and I/O resource sharing among heterogeneous processors. By connecting processors and I/O devices with the proposed fabric, heterogeneous processors can perform direct communication with each other and run in stand-alone mode with shared intra-server resources. We design the proposed fabric by extending the de-facto system I/O bus protocol PCIe (Peripheral Component Interconnect Express) and implement it with a single chip, cZodiac. By making full use of PCIe's original advantages, the interconnection and the I/O sharing mechanism are lightweight and efficient. Evaluations that have been carried out on both the FPGA (Field Programmable Gate Array) prototype and the cycle-accurate simulator demonstrate that our design is feasible and scalable. In addition, our design is suitable for not only the heterogeneous server but also the high-density server.

  9. Breaking through with Thin-Client Technologies: A Cost Effective Approach for Academic Libraries.

    Science.gov (United States)

    Elbaz, Sohair W.; Stewart, Christofer

    This paper provides an overview of thin-client/server computing in higher education. Thin-clients are like PCs in appearance, but they do not house hard drives or localized operating systems and cannot function without being connected to a server. Two types of thin-clients are described: the Network Computer (NC) and the Windows Terminal (WT).…

  10. Free Software Development. 4. Client-Server Implementation of Bone Age Assessment Calculations

    Directory of Open Access Journals (Sweden)

    Sorana Daniela BOLBOACĂ

    2003-03-01

    Full Text Available In pediatrics, bone age, also called skeletal maturity, is an expression of the biological maturity of a child and an important quantitative measure for the clinical diagnosis of endocrinological problems and growth disorders. The present paper discusses a JavaScript implementation of the Tanner-Whitehouse method, with a complete graphical interface that includes pictures and explanations for every bone. The program allows the user to select a stage (from a set of 7 or 8 stages) for every bone (from a set of 20 bones) and to input specific data such as natural age, sex, and place of residence. Based on the reported TW2 values and the selected and input data, the program computes the bone age. JavaScript functions and objects were used to make the program efficient and adaptive. Note that a classic implementation would require more than 160 groups of instructions for the user interface design alone; by creating the page dynamically, the program becomes smaller and more efficient. The program was tested and placed on a web server for direct testing via HTTP, from where it can also be downloaded and run on a personal computer without an internet connection: http://vl.academicdirect.ro/medical_informatics/bone_age/v1.0/
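    As a rough illustration of the scoring flow described above (not the published tables), the sketch below sums placeholder maturity scores for the rated bones and converts the total to a bone age through a placeholder lookup; the real TW2 stage scores and conversion tables must be taken from the published method.

      PLACEHOLDER_STAGE_SCORES = {            # bone -> {stage: score}, illustrative values only
          "radius": {"B": 16, "C": 21, "D": 30},
          "ulna":   {"B": 27, "C": 30, "D": 32},
      }
      PLACEHOLDER_SCORE_TO_AGE = [            # (minimum total score, bone age in years)
          (0, 2.0), (40, 4.0), (60, 6.0), (80, 8.0),
      ]

      def bone_age(ratings):
          """ratings: dict mapping bone name -> assigned stage letter."""
          total = sum(PLACEHOLDER_STAGE_SCORES[bone][stage] for bone, stage in ratings.items())
          age = PLACEHOLDER_SCORE_TO_AGE[0][1]
          for threshold, years in PLACEHOLDER_SCORE_TO_AGE:
              if total >= threshold:          # keep the highest threshold reached
                  age = years
          return total, age

      print(bone_age({"radius": "C", "ulna": "D"}))   # -> (53, 4.0)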

  11. Effective solutions in introducing Server-Based Computing into a hospital information system.

    Science.gov (United States)

    Kuwata, Shigeki; Teramoto, Kei; Matsumura, Yasushi; Kushniruk, Andre W; Borycki, Elizabeth M; Kondoh, Hiroshi

    2009-01-01

    Server-Based Computing (SBC) is a technology for terminal administration that achieves higher security at lower expense. Use of SBC in large hospitals, however, is not widespread because methods to effectively implement the technology have not been fully established. We present a system design that uses SBC in a large-scale hospital and then discuss the implementation problems and their solutions. With the exception of network traffic estimates, the server size estimates were validated. Three results from an evaluation of an SBC implementation were: 1) security was reinforced by applying multiple-policy adaptation to a single client terminal, 2) cost reduction was realized through fewer PC failures and lower power consumption, and 3) user-roaming was found to be effective in reducing the number of iterative operations performed by users.

  12. Windows Home Server users guide

    CERN Document Server

    Edney, Andrew

    2008-01-01

    Windows Home Server brings the idea of centralized storage, backup and computer management out of the enterprise and into the home. Windows Home Server is built for people with multiple computers at home and helps to synchronize them, keep them updated, stream media between them, and back them up centrally. Built on a similar foundation as the Microsoft server operating products, it's essentially Small Business Server for the home. This book details how to install, configure, and use Windows Home Server and explains how to connect to and manage different clients such as Windows XP, Windows Vista

  13. Client-Server and Peer-to-Peer Ad-hoc Network for a Flexible Learning Environment

    Directory of Open Access Journals (Sweden)

    Ferial Khaddage

    2011-01-01

    Full Text Available Peer-to-Peer (P2P) networking in a mobile learning environment has become a popular topic of research. One of the new emerging research ideas is the ability to combine a P2P network with a server-based network to form a strong, efficient, portable and compatible network infrastructure. This paper describes a unique mobile network architecture, which reflects the on-campus students’ need for a mobile learning environment. This can be achieved by combining two different networks, client-server and peer-to-peer ad-hoc, to form a solid and secure network. This is accomplished by employing one peer within the ad-hoc network to act as an agent-peer to facilitate communication and information sharing between the two networks. It can be implemented without any major changes to the current network technologies, and can combine any wireless protocols such as GPRS, Wi-Fi, Bluetooth, and 3G.

  14. Rancang Bangun Keanggotaan Perpustakaan STT Telematika Telkom Menggunakan RFID Berbasis Java 2 Standard Edition Dengan Konsep Client Server

    Directory of Open Access Journals (Sweden)

    Yana Yuniarsyah

    2013-05-01

    Full Text Available RFID technology is a relatively new technology that has not yet been widely applied; it can reduce the disadvantages of barcode technology. One application of RFID technology is the library membership card. The STT Telematika library previously used its membership card for borrowing and returning transactions only. With RFID in the member card, the card becomes multifunctional: in addition to borrowing and returning books, it can also be used for visitor attendance. Visitor attendance and library reports are distributed using a client-server concept, which makes data management easier for librarians. The programming language used in the design of the library information system is Java 2 Standard Edition (J2SE), with NetBeans 7.0 as the IDE and MySQL as the database. The software was designed with the waterfall (linear sequential) model; the information system was modeled with the Unified Modeling Language (UML), including use case, activity, and class diagrams, and with an Entity Relationship Diagram (ERD) for the database design. The library information system was tested against user requirements, with black-box testing of the program, and with user testing. The RFID components comprise an RFID reader, used to read the information carried by RFID tags, and RFID tags, used to transmit information to the RFID reader. The success of the client-server concept is shown by the successful recording and display of visitor attendance and reports from the client, and by the server successfully storing the visitor attendance data.

  15. PONDEROSA-C/S: client-server based software package for automated protein 3D structure determination.

    Science.gov (United States)

    Lee, Woonghee; Stark, Jaime L; Markley, John L

    2014-11-01

    Peak-picking Of Noe Data Enabled by Restriction Of Shift Assignments-Client Server (PONDEROSA-C/S) builds on the original PONDEROSA software (Lee et al. in Bioinformatics 27:1727-1728. doi: 10.1093/bioinformatics/btr200, 2011) and includes improved features for structure calculation and refinement. PONDEROSA-C/S consists of three programs: Ponderosa Server, Ponderosa Client, and Ponderosa Analyzer. PONDEROSA-C/S takes as input the protein sequence, a list of assigned chemical shifts, and nuclear Overhauser data sets ((13)C- and/or (15)N-NOESY). The output is a set of assigned NOEs and 3D structural models for the protein. Ponderosa Analyzer supports the visualization, validation, and refinement of the results from Ponderosa Server. These tools enable semi-automated NMR-based structure determination of proteins in a rapid and robust fashion. We present examples showing the use of PONDEROSA-C/S in solving structures of four proteins: two that enable comparison with the original PONDEROSA package, and two from the Critical Assessment of automated Structure Determination by NMR (Rosato et al. in Nat Methods 6:625-626. doi: 10.1038/nmeth0909-625, 2009) competition. The software package can be downloaded freely in binary format from http://pine.nmrfam.wisc.edu/download_packages.html. Registered users of the National Magnetic Resonance Facility at Madison can submit jobs to the PONDEROSA-C/S server at http://ponderosa.nmrfam.wisc.edu, where instructions and tutorials can be found. Structures are normally returned within 1-2 days.

  16. A Discussion of Thin Client Technology for Computer Labs

    CERN Document Server

    Martínez-Mateo, Jesús; Pérez-Rey, David

    2010-01-01

    Computer literacy is not negotiable for any professional in an increasingly computerised environment. Educational institutions should be equipped to provide this new basic training for modern life. Accordingly, computer labs are an essential medium for education in almost any field. Computer labs are one of the most popular IT infrastructures for technical training in primary and secondary schools, universities and other educational institutions all over the world. Unfortunately, a computer lab is expensive, in terms of both initial purchase and annual maintenance costs, and especially when we want to run the latest software. Hence, research efforts addressing computer lab efficiency, performance or cost reduction would have a worldwide repercussion. In response to this concern, this paper presents a survey on thin client technology for computer labs in educational environments. Besides setting out the advantages and drawbacks of this technology, we aim to refute false prejudices against thin clients, identif...

  17. Two-Cloud-Servers-Assisted Secure Outsourcing Multiparty Computation

    Directory of Open Access Journals (Sweden)

    Yi Sun

    2014-01-01

    Full Text Available We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-server scenario. Our main idea is to transform the outsourced data, encrypted under different users’ public keys, into data encrypted under the same two private keys of the two assisting servers, so that it is feasible to operate on the transformed ciphertexts and compute an encrypted result of the function to be computed. In order to keep the result private, the two servers cooperatively produce a custom-made result for each user that is authorized to obtain it, so that all authorized users can recover the desired result while unauthorized parties, including the two servers, cannot. Compared with previous research, our protocol is completely non-interactive between users, and both the computation and the communication complexity of each user are independent of the function being computed.

  18. Two-cloud-servers-assisted secure outsourcing multiparty computation.

    Science.gov (United States)

    Sun, Yi; Wen, Qiaoyan; Zhang, Yudong; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-server scenario. Our main idea is to transform the outsourced data, encrypted under different users' public keys, into data encrypted under the same two private keys of the two assisting servers, so that it is feasible to operate on the transformed ciphertexts and compute an encrypted result of the function to be computed. In order to keep the result private, the two servers cooperatively produce a custom-made result for each user that is authorized to obtain it, so that all authorized users can recover the desired result while unauthorized parties, including the two servers, cannot. Compared with previous research, our protocol is completely non-interactive between users, and both the computation and the communication complexity of each user are independent of the function being computed.

  19. Construction of a nuclear data server using TCP/IP

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko; Sakai, Osamu [Kyushu Univ., Fukuoka (Japan)

    1997-03-01

    We construct a nuclear data server which provides data from the evaluated nuclear data library over the network by means of TCP/IP. The client is not necessarily a human user but may be a computer program. Two examples with a prototype server program are demonstrated: the first is data transfer from the server to a user, and the second to a computer program. (author)
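    The request/response pattern described above, in which the client may be a program rather than a person, can be sketched with plain TCP sockets as below; the "GET <nuclide>" request format and the lookup table are assumptions for illustration, not the interface of the prototype server.

      import socket
      import threading

      DATA = {"U-235": "placeholder library record", "Fe-56": "placeholder library record"}

      srv = socket.create_server(("127.0.0.1", 5050))   # listening before any client connects

      def handle_one_request():
          conn, _ = srv.accept()
          with conn:
              request = conn.recv(1024).decode().strip()        # e.g. "GET U-235"
              key = request.split()[-1]
              conn.sendall(DATA.get(key, "NOT FOUND").encode())

      threading.Thread(target=handle_one_request, daemon=True).start()

      # The "client" here is itself a program requesting a record:
      with socket.create_connection(("127.0.0.1", 5050)) as client:
          client.sendall(b"GET U-235")
          print(client.recv(1024).decode())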

  20. Display graphical information optimization methods in a client-server information system

    OpenAIRE

    Юрий Викторович Мазуревич; Андрей Александрович Болдак

    2015-01-01

    This paper presents an approach to reducing the load time and the volume of data necessary to display a web page by means of server-side preprocessing. The effectiveness of this approach has been measured. The conditions under which the approach is most effective are identified, its disadvantages are discussed, and ways to mitigate them are presented.

  1. Display graphical information optimization methods in a client-server information system

    Directory of Open Access Journals (Sweden)

    Юрий Викторович Мазуревич

    2015-07-01

    Full Text Available This paper presents an approach to reducing the load time and the volume of data necessary to display a web page by means of server-side preprocessing. The effectiveness of this approach has been measured. The conditions under which the approach is most effective are identified, its disadvantages are discussed, and ways to mitigate them are presented.

  2. Experience of public procurement of Open Compute servers

    Science.gov (United States)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

    The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal of developing servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large-scale installation. One objective is to evaluate whether the OCP market is sufficiently mature and broad to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  3. Towards optimizing server performance in an educational MMORPG for teaching computer programming

    Science.gov (United States)

    Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios

    2013-10-01

    Web-based games have become significantly popular during the last few years, owing to the gradual increase in internet speed, which has driven ongoing multiplayer game development and, more importantly, the emergence of the Massively Multiplayer Online Role Playing Game (MMORPG) field. In parallel, similar technologies called educational games have been developed for use in various educational contexts, resulting in the field of Game Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM, and CPU capacity. These requirements may be even larger in an educational MMORPG that supports computer programming education, because of the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Determining the elements that affect the overall load on the game's resources is therefore essential so that server administrators can configure them and ensure the game's proper operation during computer programming education. In this paper, we propose a new methodology for monitoring and optimizing load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be foreseen and provisioned without overloading the system.

  4. 根据客户网络应答的DNS服务器设计与实现%Design and Implementation of the DNS Server Which can Response Requests of Client According to the Network the Client belong to

    Institute of Scientific and Technical Information of China (English)

    黄勇萍

    2012-01-01

    To work around the bandwidth "bottleneck" caused by network interconnection, many organizations assign their WWW or mail servers IP addresses on several different carrier networks. If each of these addresses is mapped to a separate domain name, i.e. the different access lines are distinguished by different domain names, Internet users are inconvenienced when visiting the servers. A DNS server that resolves the same domain name to different IP addresses according to the network to which the requesting client's source IP belongs both relieves the bandwidth "bottleneck" and is convenient for users. This paper describes the design and implementation, on the Windows platform, of a DNS server that answers queries according to the network the client belongs to.
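    The core of such a server is a lookup that maps the same name to different addresses depending on the client's source network (often called split-horizon resolution). A minimal sketch of that lookup is shown below; the subnets and addresses are made-up examples, and a real deployment would embed the logic in an actual DNS server rather than a bare function.

      import ipaddress

      # Same domain name, different answers depending on the requesting network (example values).
      VIEWS = {
          "www.example.edu": [
              (ipaddress.ip_network("10.10.0.0/16"), "10.10.1.80"),    # answer for clients on network A
              (ipaddress.ip_network("192.0.2.0/24"), "192.0.2.80"),    # answer for clients on network B
          ],
      }
      DEFAULT_ANSWER = {"www.example.edu": "203.0.113.80"}             # answer for everyone else

      def resolve(name, client_ip):
          client = ipaddress.ip_address(client_ip)
          for network, answer in VIEWS.get(name, []):
              if client in network:
                  return answer
          return DEFAULT_ANSWER[name]

      print(resolve("www.example.edu", "10.10.4.2"))      # -> 10.10.1.80
      print(resolve("www.example.edu", "198.51.100.7"))   # -> 203.0.113.80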

  5. TCP/IP Communication between Server and Client in Multi User Remote Lab Applications

    Directory of Open Access Journals (Sweden)

    Andreas Pester

    2008-07-01

    Full Text Available Remote labs, in contrast to virtual labs, usually allow only single-user access. To manage user access to such remote experiments, a reservation system is used. The aim of this work is to develop simultaneous multi-user access to the lab server and the remote experiment. This approach was tested for the READ remote lab and a microcontroller remote lab (MRL) installed at CUAS. The system, controlled by LabVIEW, was implemented using a data acquisition card from National Instruments. The performance of simultaneous access was tested under load with a variable number of users. For the MRL, a queue gives access to the peripherals to the main user, while the others wait for their time slot to use the system. This was implemented in this way because of the synchronous characteristics of this lab.
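    The slot-based access queue described above can be pictured as follows: only the user at the head of the queue controls the peripherals, and everyone else waits for their time slot. The sketch is a conceptual illustration with assumed names and slot handling, not the LabVIEW implementation used at CUAS.

      from collections import deque

      class AccessQueue:
          """Only the head of the queue controls the lab hardware; others wait for a slot."""
          def __init__(self):
              self.waiting = deque()

          def join(self, user):
              self.waiting.append(user)

          def current_controller(self):
              return self.waiting[0] if self.waiting else None

          def advance_slot(self):
              # Called when the current time slot expires: the head rejoins at the back.
              if self.waiting:
                  self.waiting.rotate(-1)

      q = AccessQueue()
      for user in ("alice", "bob", "carol"):
          q.join(user)
      print(q.current_controller())   # alice holds the peripherals
      q.advance_slot()
      print(q.current_controller())   # bob gets the next slot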

  6. Advanced 3-D analysis, client-server systems, and cloud computing—Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement

    Science.gov (United States)

    Zimmermann, Mathis; Falkner, Juergen

    2013-01-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR. PMID:24282750

  7. Creating A Model HTTP Server Program Using java

    CERN Document Server

    Veerasamy, Bala Dhandayuthapani

    2010-01-01

    An HTTP server is a computer program that serves webpage content to clients. A webpage is a document or resource of information that is suitable for the World Wide Web and can be accessed through a web browser and displayed on a computer screen. This information is usually in HTML format, and may provide navigation to other webpages via hypertext links. Webpages may be retrieved from a local computer or from a remote HTTP server. Webpages are requested from and served by HTTP servers using the Hypertext Transfer Protocol (HTTP). Webpages may consist of files of static or dynamic text stored within the HTTP server's file system. Client-side scripting can make webpages more responsive to user input once in the client browser. This paper covers the creation of an HTTP server program using the Java language, with basic support for HTML and JavaScript.
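    The paper above builds its server in Java; the snippet below sketches the same serve-a-page-over-HTTP idea using Python's standard library for brevity. The page content and port are arbitrary choices made for the example.

      from http.server import BaseHTTPRequestHandler, HTTPServer

      PAGE = b"<html><body><h1>Hello from a model HTTP server</h1></body></html>"

      class Handler(BaseHTTPRequestHandler):
          def do_GET(self):
              # Every GET request is answered with the same static HTML page.
              self.send_response(200)
              self.send_header("Content-Type", "text/html")
              self.send_header("Content-Length", str(len(PAGE)))
              self.end_headers()
              self.wfile.write(PAGE)

      if __name__ == "__main__":
          # Any browser (or client program) pointed at http://127.0.0.1:8080/ gets the page.
          HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()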

  8. Evaluating Thin Client Computers for Use by the Polish Army

    Science.gov (United States)

    2006-06-01

    [Fragmentary excerpt from the report: Table 2 lists software compatibility test results (e.g. RealPlayer and Windows Media Player on Windows, both passing), followed by a performance evaluation section. Other fragments mention a spare server acquired for use in case of a primary server crash, participation in an internet-based eLearning program covering Microsoft products, and Windows Server 2003 / Windows Terminal Server CAL licensing items.]

  9. Framework for Deploying Client/Server Distributed Database System for effective Human Resource Information Management Systems in Imo State Civil Service of Nigeria

    Directory of Open Access Journals (Sweden)

    Josiah Ahaiwe

    2012-08-01

    Full Text Available The information system is an integrated system that holds the financial and personnel records of persons working in the various branches of the Imo State civil service. Its purpose is to harmonize operations, reduce or if possible eliminate redundancy, and control the introduction of “ghost workers” and fraud in pension management. In this research work, an attempt is made to design a framework for deploying a client/server distributed database system for a human resource information management system, scoped to the Imo State civil service in Nigeria. The system consists of a relational database of personnel variables that can be shared by various levels of management in all the ministries and their branches located all over the state. The server is expected to be hosted in the Accountant General's office. The system is capable of handling recruitment and promotion issues, training, monthly remuneration, pension and gratuity issues, employment history, etc.

  10. SciServer Compute brings Analysis to Big Data in the Cloud

    Science.gov (United States)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment.SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines.Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally - but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data.SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer‘s new Single Sign-on Portal.We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts

  11. Hipax Cluster PACS Server

    Directory of Open Access Journals (Sweden)

    Ramin Payrovi

    2007-08-01

    Full Text Available Best performance: with the Hipax Cluster PACS Server solution we introduce the parallel computing concept as an extremely fast software system to the PACS world. In contrast to common PACS servers, the Hipax Cluster PACS software is not restricted to one or two computers, but can run on several servers controlling each other. Thus, the same services can run simultaneously on different computers. The scalable system can also be expanded later without loss of performance by adding further processors or Hipax server units, for example if new clients or modalities are to be connected. Maximum failure security: the cluster server concept offers high failure security. If one of the server PCs breaks down, its services can temporarily be taken over by another Hipax server unit. If one of the server PCs is about to be overloaded, the services will be carried out by another Hipax unit (load balancing). To increase security, e.g. against fire, the individual Hipax servers can also be located separately. This concept offers maximum security, flexibility, performance, redundancy and scalability. The Hipax Cluster PACS Server is easily administered using a web interface. In the case of a system failure (e.g. overloading or breakdown of a server PC) the system administrator receives a message via email and can solve the problem. Features: based on an SQL database; different services running on separate PCs; Hipax server units coordinated and able to control each other; extends the power of a cluster server to the whole PACS (more processors); scalable to demand; maximum performance; load balancing for optimum efficiency; maximum failure security through redundancy; warning email automatically sent to the system administrator in case of failure; web interface for system configuration; maintenance without shutting down the system.

  12. Huygens file server and storage architecture

    NARCIS (Netherlands)

    Bosch, Peter; Mullender, Sape; Stabell-Kulo, Tage

    1993-01-01

    The Huygens file server is a high-performance file server which is able to deliver multi-media data in a timely manner while also providing clients with ordinary “Unix” like file I/O. The file server integrates client machines, file servers and tertiary storage servers in the same storage architecture.

  13. Verifiable Computation with Massively Parallel Interactive Proofs

    CERN Document Server

    Thaler, Justin; Mitzenmacher, Michael; Pfister, Hanspeter

    2012-01-01

    As the cloud computing paradigm has gained prominence, the need for verifiable computation has grown increasingly urgent. The concept of verifiable computation enables a weak client to outsource difficult computations to a powerful, but untrusted, server. Protocols for verifiable computation aim to provide the client with a guarantee that the server performed the requested computations correctly, without requiring the client to perform the computations herself. By design, these protocols impose a minimal computational burden on the client. However, existing protocols require the server to perform a large amount of extra bookkeeping in order to enable a client to easily verify the results. Verifiable computation has thus remained a theoretical curiosity, and protocols for it have not been implemented in real cloud computing systems. Our goal is to leverage GPUs to reduce the server-side slowdown for verifiable computation. To this end, we identify abundant data parallelism in a state-of-the-art general-purpose...

  14. Cryptanalysis of Some Client-to-Client Password-Authenticated Key Exchange Protocols

    Directory of Open Access Journals (Sweden)

    Tianjie Cao

    2009-06-01

    Full Text Available Client-to-Client Password-Authenticated Key Exchange (C2C-PAKE) protocols allow two clients to establish a common session key based on their passwords. In a secure C2C-PAKE protocol, no computationally bounded adversary can learn anything about the session keys shared between two clients; in particular, a participating server should not learn anything about session keys. Server-compromise impersonation resilience is another desirable security property for a C2C-PAKE protocol: compromising the password verifier of any client A should not enable an outside adversary to share a session key with A. Recently, Kwon and Lee proposed four C2C-PAKE protocols in the three-party setting, and Zhu et al. proposed a C2C-PAKE protocol in the cross-realm setting. All the proposed protocols are claimed to resist server compromise. However, in this paper, we show that Kwon and Lee's protocols and Zhu et al.'s protocol are vulnerable to server-compromise attacks: a malicious server can mount man-in-the-middle attacks and can eavesdrop on the communication between the two clients.

  15. Server consolidation for heterogeneous computer clusters using Colored Petri Nets and CPN Tools

    Directory of Open Access Journals (Sweden)

    Issam Al-Azzoni

    2015-10-01

    Full Text Available In this paper, we present a new approach to server consolidation in heterogeneous computer clusters using Colored Petri Nets (CPNs). Server consolidation aims to reduce energy costs and improve resource utilization by reducing the number of servers necessary to run the existing virtual machines in the cluster. It exploits the emerging technology of live migration, which allows migrating virtual machines between servers without stopping their provided services. Server consolidation approaches attempt to find migration plans that minimize the necessary size of the cluster. Our approach finds plans which not only minimize the overall number of used servers, but also minimize the total data migration overhead; the latter objective is not taken into consideration by other approaches and heuristics. We explore the use of CPN Tools in analyzing the state spaces of the CPNs. Since the state space of the CPN model can grow exponentially with the size of the cluster, we examine different techniques to generate and analyze the state space in order to find good plans for server consolidation within acceptable time and computing power.
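    For intuition only, the sketch below packs virtual machines onto as few servers as possible with a plain first-fit-decreasing heuristic; it is not the CPN/state-space method of the paper, and it ignores the second objective, migration overhead, which that method additionally minimizes. Capacities and demands are illustrative.

      def consolidate(vms, servers):
          """vms: {vm: cpu_demand}; servers: {server: cpu_capacity} (total capacity assumed sufficient).
          Place the largest VMs first and open a new server only when no used server has room."""
          placement, load, used = {}, {s: 0 for s in servers}, []
          for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
              target = next((s for s in used if load[s] + demand <= servers[s]), None)
              if target is None:
                  target = next(s for s in servers if s not in used)   # power on another server
                  used.append(target)
              placement[vm] = target
              load[target] += demand
          return placement, used

      vms = {"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 1}
      servers = {"s1": 12, "s2": 12}
      print(consolidate(vms, servers))   # all four VMs fit on s1; s2 stays unused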

  16. Managing Data Persistence in Network Enabled Servers

    Directory of Open Access Journals (Sweden)

    Eddy Caron

    2005-01-01

    Full Text Available The GridRPC model [17] is an emerging standard promoted by the Global Grid Forum (GGF) that defines how to perform remote client-server computations on a distributed architecture. In this model, data are sent back to the client at the end of every computation. This implies unnecessary communications when computed data are needed by another server in further computations. Since communication time is sometimes the dominant cost of remote computations, this cost has to be lowered. Several tools instantiate the GridRPC model, such as NetSolve, developed at the University of Tennessee, Knoxville, USA, and DIET, developed at the LIP laboratory, ENS Lyon, France. They are usually called Network Enabled Servers (NES). In this paper, we present a discussion of the data management solutions chosen for these two NES (NetSolve and DIET) as well as experimental results.

  17. Lightweight Tactical Client: A Capability-Based Approach to Command Post Computing

    Science.gov (United States)

    2015-12-01

    [Fragmentary excerpt from the report documentation page for "Lightweight Tactical Client: A Capability-Based Approach to Command Post Computing": the recoverable abstract fragments list requirements for a Command Post Client (ref. 1), including the capability to operate for an extended period of time (48+ hr), to continue operations during disconnected, intermittent, and latent network states including fully disconnected operation, and to run on physically light hardware.]

  18. Client-to-client Password-Based Authenticated Key Establishment in a Cross-Realm Setting

    Directory of Open Access Journals (Sweden)

    Shuhua Wu

    2009-09-01

    Full Text Available The area of password-based authenticated key establishment protocols has been the subject of a vast amount of work in the last few years due to its practical aspects. Despite the attention given to it, most password-authenticated key establishment (PAKE) schemes in the literature consider authentication between a client and a server. Although some of them have been extended to three-party PAKE protocols, in which a trusted server mediates between two clients to allow mutual authentication, they are less often considered in a cross-realm setting such as the Kerberos system. In this paper, we propose a provably secure password-authenticated key establishment protocol in a cross-realm setting, where two clients in different realms obtain a secret session key as well as mutual authentication with the help of their respective servers. We deal with it using ideas similar to those used in the three-party protocol due to M. Abdalla et al. In our protocol, each client first establishes a secure channel with its server, and the servers then securely distribute a fresh common session key to the two clients. One of the attractive features is that our protocol can be easily extended to a more general scenario where a common key must be established among more than two clients. Moreover, analysis shows that the proposed protocol has a per-user computational cost of the underlying two-party encrypted key exchange.

  19. Client Mobile Software Design Principles for Mobile Learning Systems

    Directory of Open Access Journals (Sweden)

    Qing Tan

    2009-01-01

    Full Text Available In a client-server mobile learning system, client mobile software must run on the mobile phone to acquire, package, and send the student's interaction data via the mobile communications network to the connected mobile application server. The server receives and processes the client data in order to offer appropriate content and learning activities. To develop mobile learning systems, a number of very important issues must be addressed: mobile phones have scarce computing resources, they are heterogeneous devices running various mobile operating systems, they have limited user/device interaction capabilities and high data communication costs, and they must provide for device mobility and portability. In this paper we propose five principles for designing client mobile learning software. A location-based adaptive mobile learning system is presented as a proof of concept to demonstrate the applicability of these design principles.

  20. Multiple Servers - Queue Model for Agent Based Technology in Cache Consistence Maintenance of Mobile Environment

    Directory of Open Access Journals (Sweden)

    G.Shanmugarathinam

    2013-01-01

    Full Text Available Caching is one of the important techniques in mobile computing: frequently accessed data is stored on mobile clients to avoid network traffic and improve performance. In a mobile computing environment, as the number of mobile users increases and they request updates from the server, the server is often busy and clients have to wait a long time, which makes cache consistency maintenance difficult for both client and server. This paper proposes a technique using a queueing system consisting of one or more servers that provide service to arriving mobile hosts, using agent-based technology. The service mechanism of the queueing system is specified by the number of servers, each server having its own queue, and agent-based technology maintains cache consistency between the client and the server. This model saves wireless bandwidth, reduces network traffic and reduces the workload on the server. The simulation results were compared with a previous technique, and the proposed model shows significantly better performance than the earlier approach.
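    The arrangement described above, several servers each with its own queue and agents keeping client caches consistent, can be sketched as follows. Dispatching to the shortest queue and invalidating cached keys on updates are simplifying assumptions made only to illustrate the idea.

      from collections import deque

      class ServerPool:
          def __init__(self, n_servers):
              self.queues = [deque() for _ in range(n_servers)]   # one queue per server
              self.client_caches = {}                             # client -> set of cached keys

          def dispatch(self, client, request):
              min(self.queues, key=len).append((client, request)) # join the shortest queue
              if request.startswith("UPDATE"):
                  key = request.split()[1]
                  # Agent behaviour (simplified): drop the now-stale key from every client cache.
                  for cache in self.client_caches.values():
                      cache.discard(key)

      pool = ServerPool(n_servers=3)
      pool.client_caches = {"c1": {"price:42"}, "c2": {"price:42", "stock:7"}}
      pool.dispatch("c1", "UPDATE price:42")
      print([len(q) for q in pool.queues])   # -> [1, 0, 0]
      print(pool.client_caches)              # "price:42" invalidated everywhere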

  1. Computation of a near-optimal service policy for a single-server queue with homogeneous jobs

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Larsen, Christian

    2001-01-01

    We present an algorithm for computing a near-optimal service policy for a single-server queueing system when the service cost is a convex function of the service time. The policy has state-dependent service times, and it includes the options to remove jobs from the system and to let the server...

  2. Computation of a near-optimal service policy for a single-server queue with homogeneous jobs

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Larsen, Christian

    2000-01-01

    We present an algorithm for computing a near optimal service policy for a single-server queueing system when the service cost is a convex function of the service time. The policy has state-dependent service times, and it includes the options to remove jobs from the system and to let the server...

  3. The Internet accessible mathematical computation framework

    Institute of Scientific and Technical Information of China (English)

    Paul S. Wang; Simon Gray; Norbert Kajler; Dongdai Lin; Weidong Liao; Xiao Zou

    2004-01-01

    The Internet Accessible Mathematical Computation (IAMC) framework aims to make it easy to supply mathematical computing powers over the Internet/Web. The protocol-based IAMC framework enables developers to create interoperable clients and servers easily and independently. Presented are conceptual and experimental work on the IAMC framework architecture and major components: the Mathematical Computation Protocol (MCP), a client prototype (Dragonfly), a server prototype (Starfish), a mathematical encoding converter (XMEC), and an open mathematical compute engine interface (OMEI).

  4. Unconditionally verifiable blind computation

    CERN Document Server

    Fitzsimons, Joseph F

    2012-01-01

    Blind Quantum Computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output and computation remain private. Recently the authors together with Broadbent proposed a universal unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol, or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. In this paper we extend the BQC protocol presented in [Broadbent, Fitzsimons and Kashefi, FOCS 2009 p517] with new functionality allowing blind computational basis m...

  5. Pacific Missile Test Center Information Resources Management Organization (code 0300): The ORACLE client-server and distributed processing architecture

    Energy Technology Data Exchange (ETDEWEB)

    Beckwith, A. L.; Phillips, J. T.

    1990-06-10

    Computing architectures using distributed processing and distributed databases are increasingly considered acceptable solutions for advanced data processing systems. This is occurring even though there is still considerable professional debate as to what "truly" distributed computing actually is, and despite the relative lack of advanced relational database management software (RDBMS) capable of meeting the database and system integrity requirements for developing reliable integrated systems. This study investigates the functionality of ORACLE database management software performing distributed processing between a MicroVAX/VMS minicomputer and three MS-DOS-based microcomputers. The ORACLE database resides on the MicroVAX and is accessed from the microcomputers with ORACLE SQL*NET, DECnet, and ORACLE PC TOOL PACKS. Data gathered during the study reveal a demonstrable decrease in CPU demand on the MicroVAX, due to "distributed processing", when the ORACLE PC Tools are used to access the database, as opposed to database access from "dumb" terminals. Also discovered were several hardware/software constraints that must be considered in implementing various software modules. The results of the study indicate that this distributed data processing architecture is becoming sufficiently mature and reliable, and should be considered for developing applications that reduce processing on central hosts. 33 refs., 2 figs.

  6. Long-Term Performance with Thin Clients at HSBC Trinkaus

    Science.gov (United States)

    HSBC Trinkaus uses thin clients as graphics-capable desktops in a Citrix-based server-based computing environment. The new dual-view workstations are more reliable, easier to maintain and more energy-efficient than the previous terminal PCs. Even administration is carried out in dual-monitor mode.

  7. Efficient Server-Aided Secure Two-Party Function Evaluation with Applications to Genomic Computation

    Directory of Open Access Journals (Sweden)

    Blanton Marina

    2016-10-01

    Full Text Available Computation based on genomic data is becoming increasingly popular today, be it for medical or other purposes. Non-medical uses of genomic data in a computation often take place in a server-mediated setting where the server offers the ability for joint genomic testing between the users. Undeniably, genomic data is highly sensitive, which in contrast to other biometry types, discloses a plethora of information not only about the data owner, but also about his or her relatives. Thus, there is an urgent need to protect genomic data. This is particularly true when the data is used in computation for what we call recreational non-health-related purposes. Towards this goal, in this work we put forward a framework for server-aided secure two-party computation with the security model motivated by genomic applications. One particular security setting that we treat in this work provides stronger security guarantees with respect to malicious users than the traditional malicious model. In particular, we incorporate certified inputs into secure computation based on garbled circuit evaluation to guarantee that a malicious user is unable to modify her inputs in order to learn unauthorized information about the other user’s data. Our solutions are general in the sense that they can be used to securely evaluate arbitrary functions and offer attractive performance compared to the state of the art. We apply the general constructions to three specific types of genomic tests: paternity, genetic compatibility, and ancestry testing and implement the constructions. The results show that all such private tests can be executed within a matter of seconds or less despite the large size of one’s genomic data.

  8. The UMLS Knowledge Source server.

    Science.gov (United States)

    McCray, A T; Razi, A

    1995-01-01

    The UMLS Knowledge Source server is an evolving tool for accessing information stored in the UMLS Knowledge Sources. The system architecture is based on the client-server paradigm wherein remote site users send their requests to a centrally managed server at the U.S. National Library of Medicine. The client programs can run on platforms supporting the TCP/IP communication protocol. Access to the system is provided through a command-line interface and through an Application Programming Interface.

  9. Analysis of Compute Vs Retrieve Intensive Web Applications and Its Impact On The Performance Of A Web Server

    Directory of Open Access Journals (Sweden)

    Syed Mutahar Aaqib

    2012-01-01

    Full Text Available The World Wide Web (WWW has undergone remarkable change over the past few years, placing substantially heavy load on Web servers. Today’s web servers host web applications that demand high computational resources. Also some applications require heavy database retrieval processing, making server load even more critical. In this paper, performance of Apache web server running compute and retrieve-intensive web workloads is analyzed. Workload files implemented in three dynamic web programming technologies: PERL, PHP and Java Servlets are used with MySQL acting as a data source. Measurements are performed with the intent to analyze the impact of application workloads on the overall performance of the web server and determine which web technology yields better performance on Windows and Linux platforms. Experimental results depict that for both compute and retrieve intensive applications, PHP exhibits better performance than PERL and Java Servlets. A multiple linear regression model was also developed to predict the web server performance and to validate the experimental results. This regression model showed that for compute and retrieve intensive web applications, PHP exhibits better performance than Perl and Java Servlets.

  10. Prepare for X-Win32 - the new X11 server software for Windows computers

    CERN Multimedia

    IT Department

    2011-01-01

    Starnet X-Win32 will replace Exceed as the X11 Server software on Windows computers by February 2012. X11 Server software allows a Windows user to have a graphical user interface on a remote Linux server. This change, initially motivated by a significant change of license conditions for Exceed, brings an easier integration of Windows and Linux logon mechanisms. At the same time, X-Win32 addresses the common use cases while providing a more intuitive configuration interface. CERN Predefined Connections will be available as before. They offer an easy way of starting applications on LXPLUS using PuTTY or starting the KDE, GNOME or ICE window managers. Since X-Win32 is better integrated with SSH and CERN Kerberos compared to Exceed, it is much simpler to set up secure access to Linux services. The decision to choose X-Win32 as the new X11 software resulted from an evaluation that involved various user communities and support teams. More information, including the documented use cases, is available at https://...

  11. Computation Offloading for Frame-Based Real-Time Tasks under Given Server Response Time Guarantees

    Directory of Open Access Journals (Sweden)

    Anas S. M. Toma

    2014-11-01

    Full Text Available Computation offloading has been adopted to improve the performance of embedded systems by offloading the computation of some tasks, especially computation-intensive tasks, to servers or clouds. This paper explores computation offloading for real-time tasks in embedded systems, given response time guarantees from the servers, to decide which tasks should be offloaded so that the results arrive in time. We consider frame-based real-time tasks with the same period and relative deadline. When the execution order of the tasks is given, the problem can be solved in linear time. However, when the execution order is not specified, we prove that the problem is NP-complete. We develop a pseudo-polynomial-time algorithm for deriving feasible schedules, if they exist. An approximation scheme is also developed to trade off the error of the algorithm against its complexity. Our algorithms are extended to minimize the period/relative deadline of the tasks for performance maximization. The algorithms are evaluated with a case study for a surveillance system and synthesized benchmarks.

  12. DEPTH: a web server to compute depth and predict small-molecule binding cavities in proteins.

    Science.gov (United States)

    Tan, Kuan Pern; Varadarajan, Raghavan; Madhusudhan, M S

    2011-07-01

    Depth measures the extent of atom/residue burial within a protein. It correlates with properties such as protein stability, hydrogen exchange rate, protein-protein interaction hot spots, post-translational modification sites and sequence variability. Our server, DEPTH, accurately computes depth and solvent-accessible surface area (SASA) values. We show that depth can be used to predict small molecule ligand binding cavities in proteins. Often, some of the residues lining a ligand binding cavity are both deep and solvent exposed. Using the depth-SASA pair values for a residue, its likelihood to form part of a small molecule binding cavity is estimated. The parameters of the method were calibrated over a training set of 900 high-resolution X-ray crystal structures of single-domain proteins bound to small molecules (molecular weight structures. Users have the option of tuning several parameters to detect cavities of different sizes, for example, geometrically flat binding sites. The input to the server is a protein 3D structure in PDB format. The users have the option of tuning the values of four parameters associated with the computation of residue depth and the prediction of binding cavities. The computed depths, SASA and binding cavity predictions are displayed in 2D plots and mapped onto 3D representations of the protein structure using Jmol. Links are provided to download the outputs. Our server is useful for all structural analysis based on residue depth and SASA, such as guiding site-directed mutagenesis experiments and small molecule docking exercises, in the context of protein functional annotation and drug discovery.

  13. 基于三级客户机/服务器模式的GIS软件平台设计与实现%Design and Implementation of GIS Platform Based on the Three-tiered Client/Server Pattern

    Institute of Scientific and Technical Information of China (English)

    熊汉江; 龚健雅

    2001-01-01

    The rapid development of the Internet/Intranet and the application of data warehouse technology have given GIS spatial data management and applications a multi-user, distributed and networked character. Facing this trend, the stand-alone or two-tiered client/server patterns commonly adopted by traditional GIS software platforms have shortcomings that are difficult to overcome, so developing a GIS software platform based on a three-tiered client/server pattern has become an important research topic. This paper introduces in detail the basic design ideas and architecture of such a platform; on this basis, socket technology is used to construct the middleware and an experimental platform, VirtualWorld, is built. A scheme for GIS interoperability on this architecture is also introduced, and the experiment is briefly analyzed. The central or two-tiered client/server patterns have been adopted by most traditional GIS platforms. Accelerated by the development of the Internet/Intranet, however, the spatial data management and application of GIS is now tending towards multi-user, networked distribution. Spatial data can be efficiently stored by improved relational DBMSs such as Oracle and DB2, which makes it possible to develop large, distributed GIS applications, but the inefficiency and weaker security of the traditional patterns restrict this development. Compared with those patterns, the three-tiered client/server pattern has more advantages and fits the trend of spatial data management: it solves the problems of efficiency and security well and, above all, meets the demands of the spatial data warehouse, which will be built with data warehouse techniques and used to store and manage multiscale and spatio-temporal data in the future. In this paper, the design of this new kind of GIS platform, based on the three-tiered client/server pattern, is introduced. The platform consists of three components: the client, the server and the middleware. The client has three components: the spatial data management module, the integrated GIS application and the ActiveX control. The spatial data

  14. Server-side Statistics Scripting in PHP

    Directory of Open Access Journals (Sweden)

    Jan de Leeuw

    1997-06-01

    Full Text Available On the UCLA Statistics WWW server there are a large number of demos and calculators that can be used in statistics teaching and research. Some of these demos require substantial amounts of computation; others mainly use graphics. These calculators and demos are implemented in various different ways, reflecting developments in WWW-based computing. As usual, one of the main choices is between doing the work on the client side (i.e. in the browser) or on the server side (i.e. on our WWW server). Obviously, client-side computation puts fewer demands on the server. On the other hand, it requires that the client download Java applets, or install plugins and/or helpers. If JavaScript is used, client-side computations will generally be slow. We also have to assume that the client is installed properly and has the required capabilities. Requiring too much on the client side has caused browsing machines such as Netscape Communicator to grow beyond all reasonable bounds, both in size and RAM requirements. Moreover, requiring Java and JavaScript rules out such excellent browsers as Lynx or Emacs W3. For server-side computing, we can configure the server and its resources ourselves, and we need not worry about browser capabilities and configuration. Nothing needs to be downloaded, except the usual HTML pages and graphics. In the same way as on the client side, there is a scripting solution, where code is interpreted, or an object-code solution using compiled code. For server-side scripting, we use embedded languages, such as PHP/FI. The scripts in the HTML pages are interpreted by a CGI program, and the output of the CGI program is sent to the clients. Of course the CGI program is compiled, but the statistics procedures will usually be interpreted, because PHP/FI does not have the appropriate functions in its scripting language. This will tend to be slow, because embedded languages do not deal efficiently with loops and similar constructs. Thus a first

  15. Server and Client Synchronous in Multi-Channel Based on Unity3D%基于Unity3D的多通道下服务器客户端同步

    Institute of Scientific and Technical Information of China (English)

    汪瑞

    2015-01-01

    In recent years, multi-channel technology has matured and become widely used. Server-client synchronization over multiple channels based on the Unity3D engine is a key step in developing multiplayer games in Unity. This paper introduces two methods for creating the server and the client, and uses the second method to design a sky-shooting game suitable for two players on a local area network. The experimental results show that the method is feasible and has high sensitivity.

  16. Servers in SCADA applications

    Energy Technology Data Exchange (ETDEWEB)

    Marcuse, J.; Menz, B.; Payne, J.

    1995-12-31

    The rise of computerized data acquisition, storage and reporting systems has been driven by industry's demand for advanced troubleshooting aids and continual, measurable process and product quality improvements. As US companies entered into global competition they discovered ever stiffer customer requirements. These requirements were especially stiff in the auto industry, where the Japanese set a very high standard using SPC and other world-class manufacturing methods. The architecture of these SCADA (supervisory control and data acquisition) systems has gone through several evolutionary stages over the last few years. This paper examines this evolution from the mainframe computer architecture used in the 1970s, through the multi-tiered scheme of the 1980s, to the client-server architecture emerging today.

  17. Energy-efficient server management; Energieeffizientes Servermanagement

    Energy Technology Data Exchange (ETDEWEB)

    Sauter, B.

    2003-07-01

    This final report for the Swiss Federal Office of Energy (SFOE) presents the results of a project that aimed to develop an automatic shut-down system for the servers used in typical electronic data processing installations to be found in small and medium-sized enterprises. The purpose of shutting down these computers - the saving of energy - is discussed. The development of a shutdown unit on the basis of a web-server that automatically shuts down the servers connected to it and then interrupts their power supply is described. The functions of the unit, including pre-set times for switching on and off, remote operation via the Internet and its interaction with clients connected to it are discussed. Examples of the system's user interface are presented.

  18. Prototype for a Generic Thin—Client Remote Analysis Environment for CMS

    Institute of Scientific and Technical Information of China (English)

    C.D.Steenberg; J.J.Bunn; 等

    2001-01-01

    The multi-tiered architecture of the highly distributed CMS computing systems necessitates a flexible data distribution and analysis environment. We describe a prototype analysis environment which functions efficiently over wide area networks, using a server installed at the Caltech/UCSD Tier 2 prototype to analyze CMS data stored at various locations using a thin client. The analysis environment is based on existing HEP (Anaphe) and CMS (CARF, ORCA, IGUANA) software technology on the server, accessed from a variety of clients. A Java Analysis Studio (JAS, from SLAC) plug-in is being developed as a reference client. The server is operated as a "black box" on the proto-Tier2 system. ORCA Objectivity databases (e.g. an existing large CMS muon sample) are hosted on the master and slave nodes, and remote clients can request processing of queries across the server nodes and get the histogram results returned and rendered in the client. The server is implemented in pure C++ and uses XML-RPC as a language-neutral transport. This has several benefits, including much better scalability, better integration with CARF/ORCA, and, importantly, it makes the work directly useful to other non-Java general-purpose analysis and presentation tools such as Hippodraw, Lizard, or ROOT.
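
    Since the prototype uses XML-RPC as its language-neutral transport, the request/response pattern can be illustrated with Python's standard xmlrpc modules. This is only a sketch; the method name and the returned histogram are hypothetical and not the actual CMS analysis server interface.

```python
# Illustrative XML-RPC pair (hypothetical method name and data; not the
# actual CMS analysis server interface described in the abstract).
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def run_histogram_query(cut: str) -> list:
    # Stand-in for server-side processing of an analysis query.
    return [1, 5, 9, 4, 2]  # pretend histogram bin contents

server = SimpleXMLRPCServer(("localhost", 8080), allow_none=True, logRequests=False)
server.register_function(run_histogram_query)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Thin client: only the query string travels up, only bin contents come back.
client = ServerProxy("http://localhost:8080")
print(client.run_histogram_query("pt > 20"))
```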

  19. Provable data possession for securing the data from untrusted server

    Directory of Open Access Journals (Sweden)

    S.Karthikeyan

    2015-03-01

    Full Text Available The model described here uses Provable Data Possession (PDP), which allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which dramatically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof, and the challenge/response protocol transmits a small amount of data, which minimizes network communication. The two provably secure PDP schemes presented are more efficient than previous solutions, even when compared with schemes that achieve weaker guarantees, and are suited to widely distributed storage systems. Through experiments we implement and verify the practicality of PDP and show that its performance is bounded by disk I/O rather than by computation.
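
    The random-sampling challenge-response flow behind PDP can be sketched as below. This is not the paper's provably secure, homomorphic-tag construction (which avoids storing per-block tags at the client and returning whole blocks); it only illustrates how spot-checking a few random blocks keeps the verification cost low.

```python
# Toy illustration of the challenge-response idea behind PDP: the client keeps
# a key and per-block tags, then spot-checks random blocks. This is NOT the
# paper's homomorphic-tag construction, just the sampling workflow.
import hmac, hashlib, os, random

BLOCK = 4096
key = os.urandom(32)
data = os.urandom(BLOCK * 100)                       # the outsourced file
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def tag(i: int, b: bytes) -> bytes:                  # client-side preprocessing
    return hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()

tags = [tag(i, b) for i, b in enumerate(blocks)]     # retained by the client

# Challenge: ask the server for a few random blocks and verify them.
challenge = random.sample(range(len(blocks)), k=5)
proof = [(i, blocks[i]) for i in challenge]          # the server's response
assert all(hmac.compare_digest(tag(i, b), tags[i]) for i, b in proof)
print("possession check passed for blocks", challenge)
```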

  20. Drug-target interaction prediction: databases, web servers and computational models.

    Science.gov (United States)

    Chen, Xing; Yan, Chenggang Clarence; Zhang, Xiaotian; Zhang, Xu; Dai, Feng; Yin, Jian; Zhang, Yongdong

    2016-07-01

    Identification of drug-target interactions is an important process in drug discovery. Although high-throughput screening and other biological assays are becoming available, experimental methods for drug-target interaction identification remain extremely costly, time-consuming and challenging even nowadays. Therefore, various computational models have been developed to predict potential drug-target associations on a large scale. In this review, databases and web servers involved in drug-target identification and drug discovery are summarized. In addition, we introduce some state-of-the-art computational models for drug-target interaction prediction, including network-based methods, machine learning-based methods and others. For the machine learning-based methods, particular attention is paid to supervised and semi-supervised models, which differ essentially in their adoption of negative samples. Although significant improvements in drug-target interaction prediction have been obtained by many effective computational models, both network-based and machine learning-based methods have their own disadvantages. Furthermore, we discuss the future directions of network-based drug discovery and network approaches for personalized drug discovery based on personalized medicine, genome sequencing, tumor clone-based networks and cancer hallmark-based networks. Finally, we discuss a new evaluation validation framework and the formulation of the drug-target interaction prediction problem as a more realistic regression problem based on quantitative bioactivity data.

  1. Installing and Testing a Server Operating System

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2003-08-01

    Full Text Available The paper is based on the author's experience administering the FreeBSD server operating system on three servers in use under the academicdirect.ro domain. The paper describes a set of installation, preparation, and administration aspects of a FreeBSD server. The first issue is the installation procedure of the FreeBSD operating system on the i386 computer architecture. Discussed problems are the preparation and use of boot disks, hard disk partitioning, and operating system installation using an existing network topology and an internet connection. The second issue is the optimization of the operating system and the installation and configuration of server services. Discussed problems are kernel and service configuration and system and service optimization. The third issue concerns client-server applications. Using operating system utility calls, we present an original application that displays system information in a friendly web interface. An original program designed for molecular structure analysis was adapted for system performance comparisons, and it serves for a discussion of the computation speed of Pentium, Pentium II and Pentium III processors. The last issue of the paper discusses the installation and configuration of a dial-in service on a UNIX-based operating system. The discussion includes serial ports, configuration of the ppp and pppd services, and the use of the ppp and tun devices.

  2. Client Centred Design

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Nielsen, Janni; Levinsen, Karin

    2008-01-01

    In this paper we argue for the use of Client Centred preparation phases when designing complex systems. Through Client Centred Design, human-computer interaction can extend its focus on end-users to also encompass the client's needs, context and resources....

  3. "MedTRIS" (Medical Triage and Registration Informatics System): A Web-based Client Server System for the Registration of Patients Being Treated in First Aid Posts at Public Events and Mass Gatherings.

    Science.gov (United States)

    Gogaert, Stefan; Vande Veegaete, Axel; Scholliers, Annelies; Vandekerckhove, Philippe

    2016-10-01

    First aid (FA) services are provisioned on-site as a preventive measure at most public events. In Flanders, Belgium, the Belgian Red Cross-Flanders (BRCF) is the major provider of these FA services, with volunteers being deployed at approximately 10,000 public events annually. The BRCF has systematically registered information on the patients being treated in FA posts at major events and mass gatherings during the last 10 years. This information has been collected in a web-based client-server system called "MedTRIS" (Medical Triage and Registration Informatics System). MedTRIS contains data on more than 200,000 patients at 335 mass events. This report describes the MedTRIS architecture, the data collected, and how the system operates in the field. This database consolidates different types of information with regard to FA interventions in a standardized way for a variety of public events. MedTRIS allows close monitoring in "real time" of the situation at mass gatherings and immediate intervention when necessary; allows more accurate prediction of resources needed; allows validation of conceptual and predictive models for medical resources at (mass) public events; and can contribute to the definition of a standardized minimum data set (MDS) for mass-gathering health research and evaluation. Gogaert S, Vande Veegaete A, Scholliers A, Vandekerckhove P. "MedTRIS" (Medical Triage and Registration Informatics System): a web-based client server system for the registration of patients being treated in first aid posts at public events and mass gatherings. Prehosp Disaster Med. 2016;31(5):557-562.

  4. PoD: dynamically create and use remote PROOF clusters. A thin client concept.

    CERN Document Server

    CERN. Geneva

    2012-01-01

    PoD’s newly developed “pod-remote” command made it possible for users to utilize a thin client concept. In order to create dynamic PROOF clusters, users are now able to select a remote computer, even behind a firewall, to control a PoD server on...

  5. EarthServer - 3D Visualization on the Web

    Science.gov (United States)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or addons. Additionally, we are able to run the Earth data visualization client on a wide range of platforms with very different software and hardware capabilities, such as smartphones (e.g. iOS, Android), different desktop systems, etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. Underlying the EarthServer web client

  6. Empirical Analysis of Server Consolidation and Desktop Virtualization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Bao Rong Chang

    2013-01-01

    Full Text Available The transition of physical servers to a virtual server infrastructure (VSI) and of desktop devices to a virtual desktop infrastructure (VDI) raises the crucial problems of server consolidation, virtualization performance, virtual machine density, total cost of ownership (TCO), and return on investment (ROI). Besides, how to appropriately choose a hypervisor for the desired server/desktop virtualization is really challenging, because the trade-off between virtualization performance and cost is a hard decision to make in the cloud. This paper introduces five hypervisors to establish the virtual environment and then gives a careful assessment based on a C/P ratio derived from a composite index, consolidation ratio, virtual machine density, TCO, and ROI. As a result, even though ESX Server obtains the highest ROI and lowest TCO in server virtualization, and Hyper-V R2 gains the best virtual machine management performance, both of them cost too much. Instead, the best choice is Proxmox Virtual Environment (Proxmox VE), because it not only greatly reduces the initial investment needed to own a virtual server/desktop infrastructure, but also obtains the lowest C/P ratio.

  7. Delegating private quantum computations

    Science.gov (United States)

    Broadbent, Anne

    2015-09-01

    We give a protocol for the delegation of quantum computation on encrypted data. More specifically, we show that in a client-server scenario, where the client holds the encryption key for an encrypted quantum register held by the server, it is possible for the server to perform a universal set of quantum gates on the quantum data. All Clifford group gates are non-interactive, while the remaining non-Clifford group gate that we implement (the π/8 gate) requires the client to prepare and send a single random auxiliary qubit (chosen among four possibilities), and exchange classical communication. This construction improves on previous work, which requires either multiple auxiliary qubits or two-way quantum communication. Using a reduction to an entanglement-based protocol, we show privacy against any adversarial server according to a simulation-based security definition.

  8. The FELICIA bulletin board system and the IRBIS anonymous FTP server: Computer security information sources for the DOE community. CIAC-2302

    Energy Technology Data Exchange (ETDEWEB)

    Orvis, W.J.

    1993-11-03

    The Computer Incident Advisory Capability (CIAC) operates two information servers for the DOE community, FELICIA (formerly FELIX) and IRBIS. FELICIA is a computer Bulletin Board System (BBS) that can be accessed by telephone with a modem. IRBIS is an anonymous ftp server that can be accessed on the Internet. Both of these servers contain all of the publicly available CIAC, CERT, NIST, and DDN bulletins, virus descriptions, the VIRUS-L moderated virus bulletin board, copies of public domain and shareware virus-detection/protection software, and copies of useful public domain and shareware utility programs. This guide describes how to connect to these systems and obtain files from them.

  9. Optimizing Performance of Scientific Visualization Software to Support Frontier-Class Computations

    Science.gov (United States)

    2015-08-01

    assistance with accessing graphics processing unit (GPU)-enabled nodes on the HPC utility server systems via the Portable Batch System (PBS) batch job ... GPU-enabled and large-memory compute nodes. The EnSight client will run on the first allocated node (which is the graphics ... ) Acronyms: Defense; DR Clients, distributed rendering clients; GPU, graphics processing unit; HPC, high-performance computing; HPCMDC, High-Performance Computing

  10. Study on the Distributed Routing Algorithm and Its Security for Peer-to-Peer Computing

    Institute of Scientific and Technical Information of China (English)

    ZHOU Shi-jie

    2005-01-01

    By virtue of its great efficiency and graceful architecture, the client/server model has been prevalent for more than twenty years, but some disadvantages have also been recognized. It is not well suited to the next-generation Internet (NGI), which will provide a high-speed communication platform. In particular, the service bottleneck of the client/server model will become more and more severe in such a high-speed networking environment. Some approaches have been proposed to address these disadvantages; among them, distributed computing is considered an important alternative to the client/server model.

  11. In silico characterization of antifreeze proteins using computational tools and servers

    Indian Academy of Sciences (India)

    K Sivakumar; S Balaji; Gangaradhakrishnan

    2007-09-01

    In this paper, seventeen different fish antifreeze proteins (AFPs) retrieved from the Swiss-Prot database are analysed and characterized using in silico tools. Primary structure analysis shows that most of the AFPs are hydrophobic in nature due to the high content of non-polar residues. The presence of 11 cysteines in the rainbow smelt and sea raven AFPs infers that these proteins may form disulphide (SS) bonds, which are regarded as a positive factor for stability. The aliphatic index computed by ExPASy's ProtParam infers that AFPs may be stable over a wide range of temperatures. Secondary structure analysis shows that most of the fish AFPs have predominantly α-helical structures and the rest have mixed secondary structure. The very high coil content of the rainbow smelt and sea raven AFPs is due to the rich content of the more flexible glycine and the hydrophobic proline. Proline has the special property of creating kinks in polypeptide chains and disrupting ordered secondary structure. The SOSUI server predicts one transmembrane region in the winter flounder and Atlantic cod AFPs and two transmembrane regions in the yellowtail flounder AFP. The predicted transmembrane regions were visualized and analysed using helical wheel plots generated by the EMBOSS pepwheel tool. The presence of disulphide (SS) bonds in the AFPs Q01758 and P05140 is predicted by the CYS_REC tool and also identified from the three-dimensional structure using the Rasmol tool. The disulphide bonds identified from the three-dimensional structure using the Rasmol tool are likely correct, as the evaluation parameters are within the acceptable limits for the modelled 3D structures.

  12. Advancing the Power and Utility of Server-Side Aggregation

    Science.gov (United States)

    Fulker, Dave; Gallagher, James

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, data providers and (notably, due to the open-source nature of all OPeNDAP software) to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate on the topics listed above and embrace additional ones.
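
    From the data consumer's side, the benefit of server-side subsetting can be sketched with a generic OPeNDAP client such as xarray; the endpoint URL, variable name and coordinates below are placeholders, not an actual Hyrax deployment.

```python
# Sketch of a client pulling a constrained subset from an OPeNDAP/Hyrax
# endpoint so that subsetting happens server-side (all names are placeholders).
import xarray as xr

url = "http://example.org/opendap/hyrax/granule.nc"   # hypothetical endpoint
ds = xr.open_dataset(url)                             # lazy; no bulk download
subset = ds["sst"].sel(time="2016-07-01").isel(lat=slice(0, 100))
print(subset.mean().values)                           # only this slab is fetched
```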

  13. Computing a constrained control policy for a single-server queueing system

    DEFF Research Database (Denmark)

    Larsen, Christian

    We consider a single-server queueing system designed to serve homogeneous jobs. The jobs arrive at the system according to a Poisson process and all processing times are deterministic. There is a set-up cost for starting up production, and a holding cost rate is incurred for each job present. Also......, there is a service cost per job, which is a convex function of the service time. The control policy specifies when the server is on or off. It also specifies the state-dependent processing times. In order to avoid a very detailed control policy (which could be hard to implement) we will only allow the server to use...... control policy. Finally, some numerical results are presented....

  14. Virtual Network Computing Testbed for Cybersecurity Research

    Science.gov (United States)

    2015-08-17

    client nodes. The client nodes are lightweight nodes that run user applications. Currently, our clients run Linux, Windows 7 and Windows XP. It is ... servers and chat servers. The servers can be Windows servers, e.g. Win 2003, or Linux servers. Each server contains actual data, e.g. a database, and ... vSphere and vCenter provide the basic capabilities enabling virtual servers to be configured and deployed on command. vSphere is a very mature and

  15. A web-based care-requiring client and Home Helper mutual support system.

    Science.gov (United States)

    Ogawa, Hidekuni; Yonezawa, Yoshiharu; Maki, Hiromichi; Hahn, Allen W; Caldwell, W Morton

    2005-01-01

    To improve the efficiency of home care for the elderly, a web-based system has been developed to enable faster communication between care-requiring clients, their Home Helper and the care manager. Changes to care items, such as cooking, bathing, washing, cleaning and shopping, are usually requested by the elderly client over the telephone. However, the care central office often requires 24 hours to process and respond to such spoken requests. The system we have developed consists of Internet client computers with liquid crystal input tablets, wireless Internet Java-enabled mobile phones and a central office server, and it yields almost instant communication. The care clients enter requests on the liquid crystal tablet at home, and their computer sends these requests to the server at the Home Helper central office. The server automatically creates a new file of the requested items and then immediately transfers them to the care manager's and Home Helper's mobile phones. With this non-vocal and paperless system, the care-requiring clients, who can easily operate the liquid crystal tablet, can very quickly communicate their care change requests to their Home Helper.

  16. Call-for-tender documentation in the area of servers, personal computers and networks; Ausschreibungsunterlagen im Server-, PC- und Netzwerk-Bereich

    Energy Technology Data Exchange (ETDEWEB)

    Grieder, T.; Huser, A.

    2003-07-01

    As a result of this work, sample texts, so-called performance sheets, have been drawn up for the invitation to tender for IT devices. As a supplement to the standard technical requirements, such as computer performance, memory capacity, etc., these texts cover the aspects of energy efficiency. The performance sheets can be enclosed with the invitations to tender as an appendix, or be used directly as text modules. They are supplemented by explanatory texts, which give information regarding technical terms, labels and possible technical realizations. Performance sheets and explanatory texts are included in the appendix to this report. The goal of these activities is to exert pressure on the market, which should ultimately lead to more efficient units. In addition, however, these texts should serve to make the offices placing the invitations to tender more aware of the energy efficiency aspect. Energy saving functions are fairly common for PCs and monitors nowadays. Reference to proved technical realisations can be made in the performance sheets. The situation is more difficult for servers. Although some technical solutions have been initiated, very little is known about practical applications. Further activities are necessary here. (author)

  17. Building a Linux-Based Server on the LAN of the Information Systems Laboratory, Information Technology Department, Politeknik Negeri Padang

    Directory of Open Access Journals (Sweden)

    Fifi Rasyidah

    2014-03-01

    Full Text Available The Information Systems Laboratory of the Information Technology Department at Politeknik Negeri Padang has 30 computers as educational facilities to support the learning process. All of the computers are used at the same time during a lesson, which makes it difficult to monitor each student's activities. To provide a solution for the lecturer, the author built a server using the Linux operating system and clients running Windows, with Samba File Server providing file sharing. Using Samba, the lecturer can share data and use the server as a data storage medium. In addition, VNC (Virtual Network Computing) is used to simplify monitoring and supervising the clients' work. Based on the experimental results, it can be concluded that Samba File Server can be used after some configuration is applied to certain files, and that VNC can control all of the clients. The author suggests using the latest version of Samba File Server, which has more features than previous versions, and configuring VNC on Ubuntu Linux, since the service is available there. Keywords: Samba File Server, VNC, Ubuntu installation

  18. Multiversion Two-Phase Locking Concurrency Control Protocol in Real-Time Client/Server Database Systems

    Institute of Scientific and Technical Information of China (English)

    雷向东; 赵跃龙; 袁晓莉

    2005-01-01

    A multiversion two-phase locking concurrency control protocol for real-time client/server databases is proposed. The protocol combines the advantages of multiversion concurrency control and two-phase locking, and uses the following strategy to reduce the number of transactions that miss their deadlines: if the conflict set contains a transaction with higher priority than the lock-holding transaction Ti, and restarting Ti would not cause it to miss its deadline, then Ti is restarted and the highest-priority transaction in the conflict set obtains the lock; otherwise, the other transactions in the conflict set wait. To improve the response time of read-only transactions, the client maintains a consistent database shadow and read-only transactions are processed on the client side. Simulation comparisons with the 2V2PL and OCC-TI-WAIT-50 protocols show that the proposed protocol not only effectively reduces the transaction deadline-miss rate, but also improves the response time of read-only transactions and reduces the lock-waiting time of high-priority transactions. Its performance is superior to both the 2V2PL and OCC-TI-WAIT-50 protocols.
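
    A simplified sketch of the conflict-resolution rule stated above (restart the lock holder only when a higher-priority transaction is waiting and the holder can still meet its deadline) might look as follows; the attribute names are illustrative, not part of the protocol specification.

```python
# Simplified sketch of the priority-based conflict-resolution rule described
# above; this is not the full multiversion two-phase locking protocol.
def resolve_conflict(holder, conflict_set, now):
    """holder and members of conflict_set carry .priority, .deadline, .restart_cost."""
    higher = [t for t in conflict_set if t.priority > holder.priority]
    if higher and now + holder.restart_cost <= holder.deadline:
        winner = max(conflict_set, key=lambda t: t.priority)
        return ("restart_holder_and_grant_lock", winner)
    return ("block_conflicting_transactions", None)
```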

  19. Establishing a Remote Monitoring and Control System Based on the Client/Server Model

    Institute of Scientific and Technical Information of China (English)

    荣天琪; 张宗杰; 刘彤

    2013-01-01

    With the growing demand for telecommuting and the need for convenient enterprise management, remote monitoring of computers is an inevitable trend. Remote monitoring technology greatly facilitates the maintenance and control of office networks. The system described here is a Windows application written in VC++ that implements network interconnection, chat, file transfer, and remote desktop control. This article describes the basic algorithms and ideas used in a remote monitoring system based on the client/server model, analyzes the basic principles of remote monitoring, and provides the debugging procedure.

  20. Web-client based distributed generalization and geoprocessing

    Science.gov (United States)

    Wolf, E.B.; Howe, K.

    2009-01-01

    Generalization and geoprocessing operations on geospatial information were once the domain of complex software running on high-performance workstations. Currently, these computationally intensive processes are the domain of desktop applications. Recent efforts have been made to move geoprocessing operations server-side in a distributed, web-accessible environment. This paper initiates research into portable client-side generalization and geoprocessing operations as part of a larger effort in user-centered design for the US Geological Survey's The National Map. An implementation of the Ramer-Douglas-Peucker (RDP) line-simplification algorithm was created in the open-source OpenLayers geoweb client. This implementation was benchmarked using differing data structures and browser platforms. The implementation and the benchmark results are discussed in the general context of client-side geoprocessing.
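
    The paper's implementation is JavaScript inside OpenLayers; purely as an illustration of the algorithm itself, a plain-Python version of Ramer-Douglas-Peucker simplification is sketched below.

```python
# Plain-Python sketch of Ramer-Douglas-Peucker line simplification
# (illustrative only; the paper's version lives in the OpenLayers client).
from math import hypot

def perpendicular_distance(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return hypot(px - ax, py - ay)
    # distance from p to the infinite line through a and b
    return abs(dy * px - dx * py + bx * ay - by * ax) / hypot(dx, dy)

def rdp(points, epsilon):
    if len(points) < 3:
        return list(points)
    # find the vertex farthest from the chord joining the endpoints
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    # keep that vertex and recurse on both halves
    left = rdp(points[: index + 1], epsilon)
    right = rdp(points[index:], epsilon)
    return left[:-1] + right

print(rdp([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)], 1.0))
```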

  1. Building a Streaming-Based Multicast Server Using CentOS

    Directory of Open Access Journals (Sweden)

    Irwan Susanto

    2009-11-01

    Full Text Available The development of IP-based technology has contributed to the development of telecommunication and information technology. One IP-based application is multicast streaming, a form of broadcasting. The streaming process is carried out by accessing the Telkom-2 broadcast through the AKATEL LAN network, which the server then forwards to clients using the IP multicast system. Multicast addresses are Class D IP addresses, which allow data packets to be sent in real time. In a multicast system, the server sends only one copy of each data packet to a group of clients at the same transmission speed. The Telkom-2 broadcast is first acquired and then sent as data packets: the server accesses the broadcast using a parabolic antenna and a Hughes modem, then forwards it to clients through the AKATEL LAN network. Clients must connect to the server via the AKATEL LAN network and have VLC player installed in order to access the Telkom-2 broadcast.
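
    The Class D (multicast) addressing idea can be sketched with a minimal UDP sender; the group address and port below are placeholders, and the actual installation streams video with VLC rather than raw UDP text.

```python
# Minimal UDP multicast sender (illustrative; group address and port are
# placeholders, not the configuration used in the paper's installation).
import socket

GROUP, PORT = "239.1.1.1", 5004        # 224.0.0.0-239.255.255.255 is Class D
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)  # stay on the LAN

# One send reaches every client that has joined the multicast group.
sock.sendto(b"one packet, many receivers", (GROUP, PORT))
```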

  2. Liberate Mediacast Server

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The Mediacast server schedules the retrieval of HTML content from multiple sources, then organizes and broadcasts the content to all clients on the appropriate channels at the specified times. The content is displayed upon subscriber request, when triggered by an event, or automatically. These broadcasts are generally performed in-band, but Mediacast can also broadcast over the out-of-band network. This allows subscribers to display and interact with a wide variety of specialized content without significantly increasing network traffic.

  3. Redirection of client/server relationship of X Window system as a simple, low-cost, departmental picture archiving and communication system solution for nuclear medicine.

    Science.gov (United States)

    Datz, F L; Baune, D A; Christian, P E

    1994-08-01

    Picture archiving and communication systems (PACS) offer significant advantages over current film-management techniques. However, PACS are complex and expensive, factors that have limited their entry into the radiology and nuclear medicine communities. We present a simple, low-cost PACS solution that allows viewing of images from different computer systems by redirection of the X Window system. In this technique, multiple copies of the imaging software are remotely opened from generic UNIX workstations interfaced to the main computer system via Transmission Control Protocol/Internet Protocol over Ethernet. The X Window system that provides the windowing system for the main computer is redirected to the workstations' displays. With this technique, viewing and processing of images on a remote station is virtually identical to working at the main computer's console. The technique requires that the commercial imaging system's hardware, operating system, and imaging software support multiuser multitasking and the execution of multiple copies of its imaging software, and that they use X Windows as the graphical system. Advantages of the technique include low cost, ease of maintenance, ease of interconnecting different types of computers, the capacity to view images regardless of file format, and the capacity to both view and process images. The latter is a necessity for modalities such as nuclear medicine. A disadvantage of the technique is that the number of nodes that can be supported is limited.
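
    The redirection itself amounts to starting another copy of the imaging application on the main computer with its X display pointed at a remote workstation. A minimal sketch follows; the display name and program name are placeholders, not part of the commercial system described.

```python
# Sketch of X11 redirection: run one more copy of the imaging application on
# the main computer, but display it on a remote workstation's X server.
# "viewer1:0" and "imaging_console" are hypothetical names.
import os, subprocess

env = dict(os.environ, DISPLAY="viewer1:0")      # remote workstation's display
subprocess.Popen(["imaging_console"], env=env)   # the window appears on viewer1
```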

  4. Computing environment for the ASSIST data warehouse at Lawrence Livermore National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Shuk, K.

    1995-11-01

    The current computing environment for the ASSIST data warehouse at Lawrence Livermore National Laboratory is that of a central server that is accessed by a terminal or terminal emulator. The initiative to move to a client/server environment is strong, backed by desktop machines becoming more and more powerful. The desktop machines can now take on parts of tasks once run entirely on the central server, making the whole environment computationally more efficient as a result. Services are tasks that are repeated throughout the environment such that it makes sense to share them; tasks such as email, user authentication and file transfer are services. The new client/server environment needs to determine which services must be included in the environment for basic functionality. These services then unify the computing environment, not only for the forthcoming ASSIST+, but for Administrative Information Systems as a whole, joining various server platforms with heterogeneous desktop computing platforms.

  5. Improving completion rates for client intake forms through Audio Computer-Assisted Self-Interview (ACASI): results from a pilot study with the Avon Breast Health Outreach Program.

    Science.gov (United States)

    Hallum-Montes, Rachel; Senter, Lindsay; D'Souza, Rohan; Gates-Ferris, Kathryn; Hurlbert, Marc; Anastario, Michael

    2014-01-01

    This study compares rates of completion of client intake forms (CIFs) collected via three interview modes: audio computer-assisted self-interview (ACASI), face-to-face interview (FFI), and self-administered paper-based interview (SAPI). A total of 303 clients served through the Avon Breast Health Outreach Program (BHOP) were sampled from three U.S. sites. Clients were randomly assigned to complete a standard CIF via one of the three interview modes. Logistic regression analyses demonstrated that clients were significantly more likely to complete the entire CIF via ACASI than either FFI or SAPI. The greatest observed differences were between ACASI and SAPI; clients were almost six times more likely to complete the CIF via ACASI as opposed to SAPI (AOR = 5.8, p < .001). We recommend that where feasible, ACASI be utilized as an effective means of collecting client-level data in healthcare settings. Adoption of ACASI in health centers may translate into higher completion rates of intake forms by clients, as well as reduced burden on clinic staff to enter data and review intake forms for completion.

  6. Exam 70-411 administering Windows Server 2012

    CERN Document Server

    Course, Microsoft Official Academic

    2014-01-01

    Microsoft Windows Server is a multi-purpose server designed to increase reliability and flexibility of  a network infrastructure. Windows Server is the paramount tool used by enterprises in their datacenter and desktop strategy. The most recent versions of Windows Server also provide both server and client virtualization. Its ubiquity in the enterprise results in the need for networking professionals who know how to plan, design, implement, operate, and troubleshoot networks relying on Windows Server. Microsoft Learning is preparing the next round of its Windows Server Certification program

  7. PERFORMANCE EVALUATION OF DIRECT PROCESSOR ACCESS FOR NON DEDICATED SERVER

    Directory of Open Access Journals (Sweden)

    P. S. BALAMURUGAN

    2010-10-01

    Full Text Available The objective of this paper is to design a co-processor for a desktop machine that enables the machine to act as a non-dedicated server, such that the co-processor acts as the server processor and the multi-core processor acts as the desktop processor. By implementing this methodology, a client machine can be made to act as both a non-dedicated server and a client machine. Such machines can be used in autonomous networks. This design leads to a cost-effective server: a machine that can act in parallel as a non-dedicated server and a client machine, or that can be switched to act as either client or server.

  8. A Web Server and Mobile App for Computing Hemolytic Potency of Peptides

    Science.gov (United States)

    Chaudhary, Kumardeep; Kumar, Ritesh; Singh, Sandeep; Tuknait, Abhishek; Gautam, Ankur; Mathur, Deepika; Anand, Priya; Varshney, Grish C.; Raghava, Gajendra P. S.

    2016-03-01

    Numerous therapeutic peptides do not enter clinical trials simply because of their high hemolytic activity. Recently, we developed a database, Hemolytik, for maintaining experimentally validated hemolytic and non-hemolytic peptides. The present study describes a web server and mobile app developed for predicting and screening peptides for hemolytic potency. Firstly, we generated a dataset, HemoPI-1, that contains 552 hemolytic peptides extracted from the Hemolytik database and 552 random non-hemolytic peptides (from Swiss-Prot). The sequence analysis of these peptides revealed that certain residues (e.g., L, K, F, W) and motifs (e.g., “FKK”, “LKL”, “KKLL”, “KWK”, “VLK”, “CYCR”, “CRR”, “RFC”, “RRR”, “LKKL”) are more abundant in hemolytic peptides. Therefore, we developed models for discriminating hemolytic and non-hemolytic peptides using various machine learning techniques and achieved more than 95% accuracy. We also developed models for discriminating peptides having high and low hemolytic potential on different datasets called HemoPI-2 and HemoPI-3. In order to serve the scientific community, we developed a web server, mobile app and Java-based standalone software (http://crdd.osdd.net/raghava/hemopi/).
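
    The kind of residue and motif features mentioned above can be sketched in a few lines; the motifs listed come from the abstract, while the example peptide and the feature names are purely illustrative (the actual HemoPI models feed many such features into machine-learning classifiers).

```python
# Toy sketch of residue and motif frequency features; the motif list is from
# the abstract, everything else (peptide, feature names) is illustrative.
from collections import Counter

MOTIFS = ["FKK", "LKL", "KKLL", "KWK", "VLK", "CYCR", "CRR", "RFC", "RRR", "LKKL"]

def simple_features(peptide: str) -> dict:
    counts = Counter(peptide)
    feats = {f"frac_{aa}": counts[aa] / len(peptide) for aa in "LKFW"}
    feats.update({f"motif_{m}": peptide.count(m) for m in MOTIFS})
    return feats

print(simple_features("FLKKLAKKVLKWLKGPQRSTRRR"))  # arbitrary example sequence
```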

  9. Cambridge-Cranfield High Performance Computing Facility (HPCF) purchases ten Sun Fire(TM) 15K servers to dramatically increase power of eScience research

    CERN Multimedia

    2002-01-01

    "The Cambridge-Cranfield High Performance Computing Facility (HPCF), a collaborative environment for data and numerical intensive computing privately run by the University of Cambridge and Cranfield University, has purchased 10 Sun Fire(TM) 15K servers from Sun Microsystems, Inc.. The total investment, which includes more than $40 million in Sun technology, will dramatically increase the computing power, reliability, availability and scalability of the HPCF" (1 page).

  10. WPS-based technology for client-side remote sensing data processing

    Directory of Open Access Journals (Sweden)

    E. Kazakov

    2015-04-01

    that the processing servers could play the role of the clients connecting to the service supply server. The study was partially supported by the Russian Foundation for Basic Research (RFBR, research project No. 13-05-12079 ofi_m).

  11. [Current internet technology for gynecology--from hypertext transfer protocol to embedded web server].

    Science.gov (United States)

    Seufert, R; Woernle, F

    2000-01-01

    The scientific and commercial use of the internet has caused a revolution in information technologies and has influenced medical communication and documentation. Web browsers are becoming the universal starting point for all kinds of client-server applications. Many commercial and medical systems--such as information reservation systems--are being shifted towards web-based systems. This paper describes these new techniques. Security problems are the main topics for further developments in medical computing.

  12. Enhanced delegated computing using coherence

    Science.gov (United States)

    Barz, Stefanie; Dunjko, Vedran; Schlederer, Florian; Moore, Merritt; Kashefi, Elham; Walmsley, Ian A.

    2016-03-01

    A longstanding question is whether it is possible to delegate computational tasks securely—such that neither the computation nor the data is revealed to the server. Recently, both a classical and a quantum solution to this problem were found [C. Gentry, in Proceedings of the 41st Annual ACM Symposium on the Theory of Computing (Association for Computing Machinery, New York, 2009), pp. 167-178; A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, 2009), pp. 517-526]. Here, we study the first step towards the interplay between classical and quantum approaches and show how coherence can be used as a tool for secure delegated classical computation. We show that a client with limited computational capacity—restricted to an XOR gate—can perform universal classical computation by manipulating information carriers that may occupy superpositions of two states. Using single photonic qubits or coherent light, we experimentally implement secure delegated classical computations between an independent client and a server, which are installed in two different laboratories and separated by 50 m. The server has access to the light sources and measurement devices, whereas the client may use only a restricted set of passive optical devices to manipulate the information-carrying light beams. Thus, our work highlights how minimal quantum and classical resources can be combined and exploited for classical computing.

  13. GlusterFS One Storage Server to Rule Them All

    Energy Technology Data Exchange (ETDEWEB)

    Boyer, Eric B. [Los Alamos National Laboratory; Broomfield, Matthew C. [Los Alamos National Laboratory; Perrotti, Terrell A. [Los Alamos National Laboratory

    2012-07-30

    GlusterFS is a Linux-based distributed file system, designed to be highly scalable and to serve many clients. Some reasons to use GlusterFS are: no centralized metadata server, scalability, open source, dynamic and live service modifications, use over InfiniBand or Ethernet, tunability for speed and/or resilience, and flexible administration. It is useful for enterprise environments (virtualization and high-performance computing (HPC)) and works with Mac, Linux and Windows clients. Conclusions are: (1) GlusterFS proved to have widespread capabilities as a virtual file system; (2) scalability is very dependent upon the underlying hardware; (3) it lacks a built-in encryption and security paradigm; and (4) it is best suited to a general-purpose computing environment.

  14. Server-side Filtering and Aggregation within a Distributed Environment

    Science.gov (United States)

    Currey, J. C.; Bartle, A.

    2015-12-01

    Intercalibration, validation, and data mining use cases require more efficient access to the massive volumes of observation data distributed across multiple agency data centers. The traditional paradigm of downloading large volumes of data to a centralized server or desktop computer for analysis is no longer viable. More analysis should be performed within the host data centers using server-side functions. Many comparative analysis tasks require far less than 1% of the available observation data. The Multi-Instrument Intercalibration (MIIC) Framework provides web services to find, match, filter, and aggregate multi-instrument observation data. Matching measurements from separate spacecraft in time, location, wavelength, and viewing geometry is a difficult task, especially when data are distributed across multiple agency data centers. Event prediction services identify near-coincident measurements with matched viewing geometries near orbit crossings using complex orbit propagation and spherical geometry calculations. The number and duration of event opportunities depend on orbit inclinations, altitude differences, and requested viewing conditions (e.g., day/night). Event observation information is passed to remote server-side functions to retrieve matched data. Data may be gridded, spatially convolved onto instantaneous fields of view, or spectrally resampled or convolved. Narrowband instruments are routinely compared to hyperspectral instruments such as AIRS and CRIS using relative spectral response (RSR) functions. Spectral convolution within server-side functions significantly reduces the amount of hyperspectral data needed by the client. This combination of intelligent selection and server-side processing significantly reduces network traffic and the data to be processed on local servers. OPeNDAP is mature networking middleware already deployed at many of the Earth science data centers. Custom OPeNDAP server-side functions that provide filtering, histogram analysis (1D
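
    Spectral convolution against a relative spectral response (RSR) reduces a hyperspectral spectrum to a single band-averaged value, which is why doing it server-side saves so much transfer. A sketch with synthetic numbers (not MIIC's actual server-side function) is shown below.

```python
# Band-averaged radiance = integral(L * RSR) / integral(RSR), on a synthetic
# spectrum and RSR; only illustrates the spectral-convolution step.
import numpy as np

wl = np.linspace(600.0, 700.0, 501)                   # wavelength grid (nm)
radiance = 80.0 + 0.05 * (wl - 650.0)                 # synthetic hyperspectral spectrum
rsr = np.exp(-0.5 * ((wl - 650.0) / 10.0) ** 2)       # synthetic narrowband RSR

band_radiance = np.trapz(radiance * rsr, wl) / np.trapz(rsr, wl)
print(f"band-averaged radiance: {band_radiance:.2f}")  # one number leaves the server
```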

  15. The X-Files Investigating Alien Performance in a Thin-client World

    CERN Document Server

    Gunther, N J

    2000-01-01

    Many scientific applications use the X11 window environment; an open source windows GUI standard employing a client/server architecture. X11 promotes: distributed computing, thin-client functionality, cheap desktop displays, compatibility with heterogeneous servers, remote services and administration, and greater maturity than newer web technologies. This paper details the author's investigations into close encounters with alien performance in X11-based seismic applications running on a 200-node cluster, backed by 2 TB of mass storage. End-users cited two significant UFOs (Unidentified Faulty Operations) i) long application launch times and ii) poor interactive response times. The paper is divided into three major sections describing Close Encounters of the 1st Kind: citings of UFO experiences, the 2nd Kind: recording evidence of a UFO, and the 3rd Kind: contact and analysis. UFOs do exist and this investigation presents a real case study for evaluating workload analysis and other diagnostic tools.

  16. UniTree Name Server internals

    Energy Technology Data Exchange (ETDEWEB)

    Mecozzi, D.; Minton, J.

    1996-01-01

    The UniTree Name Server (UNS) is one of several servers which make up the UniTree storage system. The Name Server is responsible for mapping names to capabilities. Names are generally human-readable ASCII strings of any length. Capabilities are unique 256-bit identifiers that point to files, directories, or symbolic links. The Name Server implements a UNIX-style hierarchical directory structure to facilitate name-to-capability mapping. The principal task of the Name Server is to manage the directories which make up the UniTree directory structure. The principal clients of the Name Server are the FTP daemon, NFS and a few UniTree utility routines. However, the Name Server is a generalized server and will accept messages from any client. The purpose of this paper is to describe the internal workings of the UniTree Name Server. In cases where it seems appropriate, the motivation for a particular choice of algorithm as well as a description of the algorithm itself will be given.

  17. WMS Server 2.0

    Science.gov (United States)

    Plesea, Lucian; Wood, James F.

    2012-01-01

    This software is a simple yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of OGC WMS 1.1.1, running as a FastCGI client and using the Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are handled by a back-end server. The server has explicit support for a colocated tiled WMS, including rapid response to black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back end allows great flexibility in data access. The server is a port of the original IRIX/IL implementation to a Linux/GDAL platform. It is simpler to configure and use and, depending on the storage format used, has better performance than other available implementations. WMS Server 2.0 is a high-performance WMS implementation due to its FastCGI architecture, and the use of a GDAL data back end allows great flexibility. The configuration is relatively simple, based on a single XML file. It provides scaling and cropping, as well as blending of multiple layers based on layer transparency.
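
    The protocol side can be illustrated by building a WMS 1.1.1 GetMap request from its standard parameters; the endpoint and layer name below are placeholders, not this server's configuration.

```python
# Building a standard WMS 1.1.1 GetMap request; endpoint and layer are placeholders.
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "global_mosaic", "STYLES": "",
    "SRS": "EPSG:4326", "BBOX": "-180,-90,180,90",
    "WIDTH": 1024, "HEIGHT": 512, "FORMAT": "image/jpeg",
}
url = "http://example.org/wms?" + urlencode(params)
print(url)  # fetch this URL with any HTTP client to receive the rendered JPEG
```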

  18. A Newer User Authentication, File encryption and Distributed Server Based Cloud Computing security architecture

    Directory of Open Access Journals (Sweden)

    Kawser Wazed Nafi

    2012-10-01

    Full Text Available The cloud computing platform gives people the opportunity to share resources, services and information among the people of the whole world. In a private cloud system, information is shared among the persons who are in that cloud, which complicates security and the hiding of personal information. In this paper we propose a new security architecture for a cloud computing platform. It ensures secure communication and hides information from others. An AES-based file encryption system and an asynchronous key system for exchanging information or data are included in this model. This structure can easily be applied to the main cloud computing features, e.g. PaaS, SaaS and IaaS. The model also includes a one-time password system for the user authentication process. Our work mainly deals with the security system of the whole cloud computing platform.
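
    As an illustration of client-side file encryption before data leaves for the cloud, the sketch below uses Fernet (AES-128-CBC plus HMAC) from the cryptography package; this is not the paper's exact AES/key-exchange scheme, only the general idea.

```python
# Hedged sketch of encrypting data on the client before upload; Fernet is
# AES-128-CBC with HMAC, used here only to illustrate the general idea.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # would be exchanged/protected separately
f = Fernet(key)

plaintext = b"client data destined for cloud storage"
token = f.encrypt(plaintext)           # this is what actually leaves the client
assert f.decrypt(token) == plaintext   # only the key holder can recover the data
```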

  19. HTML thin client and transactions

    CERN Document Server

    Touchette, J F

    1999-01-01

    When writing applications for thin clients such as Web browsers, you face several challenges that do not exist with fat-client applications written in Visual Basic, Delphi, or Java. For one thing, your development tools do not include facilities for automatically building reliable, nonrepeatable transactions into applications. Consequently, you must devise your own techniques to prevent users from transmitting duplicate transactions. The author explains how to implement reliable, nonrepeatable transactions using a technique that is applicable to any Java Server Development Kit based architecture. Although the examples presented are based on the IBM WebSphere 2.1 Application Server, they do not make use of any IBM WebSphere extensions. In short, the concepts presented here can be implemented in Perl CGI and ASP scripts, and the sample code has been tested with JDK 1.1.6 and 1.2. (0 refs).
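
    The non-repeatable-transaction technique can be sketched framework-neutrally as a one-time token that the server issues with the form and consumes on first submission; the article's own examples target IBM WebSphere servlets, so everything below is illustrative.

```python
# Minimal sketch of the non-repeatable-transaction idea: issue a one-time token
# with the form and reject any resubmission of the same token. Framework-neutral
# illustration only.
import secrets

_pending = set()

def issue_token() -> str:             # embed in a hidden form field
    token = secrets.token_hex(16)
    _pending.add(token)
    return token

def submit(token: str, payload: dict) -> str:
    if token not in _pending:
        return "duplicate or unknown transaction - ignored"
    _pending.discard(token)           # consume the token first
    # ... perform the real transaction with payload here ...
    return "transaction committed"

t = issue_token()
print(submit(t, {"amount": 10}))      # committed
print(submit(t, {"amount": 10}))      # duplicate - ignored
```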

  20. Microsoft Windows Server Administration Essentials

    CERN Document Server

    Carpenter, Tom

    2011-01-01

    The core concepts and technologies you need to administer a Windows Server OS Administering a Windows operating system (OS) can be a difficult topic to grasp, particularly if you are new to the field of IT. This full-color resource serves as an approachable introduction to understanding how to install a server, the various roles of a server, and how server performance and maintenance impacts a network. With a special focus placed on the new Microsoft Technology Associate (MTA) certificate, the straightforward, easy-to-understand tone is ideal for anyone new to computer administration looking t

  1. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan.

    Science.gov (United States)

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    The rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH-environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds from the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors of a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  2. CCTOP: a Consensus Constrained TOPology prediction web server.

    Science.gov (United States)

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access of the CCTOP server is also available, and an example of client-side script is provided.

  3. Mobile Agent-Based Secure Task Partitioning and Allocation Algorithm for Cloud & Client Computing

    Institute of Scientific and Technical Information of China (English)

    徐小龙; 程春玲; 熊婧夷; 王汝传

    2011-01-01

    In order to protect the privacy of tasks in the cloud & client computing environment and prevent malicious nodes or competitors from prying into the internal logic and objectives of a task, a mobile agent-based secure task partitioning and allocation algorithm for cloud & client computing is proposed. The new algorithm considers the cloud computing cluster server nodes and the user terminal nodes together, divides the task into a number of appropriate sub-tasks, and utilizes mobile agents to carry the code and data of the sub-tasks to suitable nodes, in accordance with the corresponding task allocation, for execution. Results from a developed prototype system show that, under the protection of the algorithm, a malicious terminal node that inspects the code and data of the sub-task assigned to it, or even colludes in attacking the system, still cannot understand the overall workflow and final objective of the task.

  4. Crysalis: an integrated server for computational analysis and design of protein crystallization.

    Science.gov (United States)

    Wang, Huilin; Feng, Liubin; Zhang, Ziding; Webb, Geoffrey I; Lin, Donghai; Song, Jiangning

    2016-02-24

    The failure of multi-step experimental procedures to yield diffraction-quality crystals is a major bottleneck in protein structure determination. Accordingly, several bioinformatics methods have been successfully developed and employed to select crystallizable proteins. Unfortunately, the majority of existing in silico methods only allow the prediction of crystallization propensity, seldom enabling computational design of protein mutants that can be targeted for enhancing protein crystallizability. Here, we present Crysalis, an integrated crystallization analysis tool that builds on support-vector regression (SVR) models to facilitate computational protein crystallization prediction, analysis, and design. More specifically, the functionality of this new tool includes: (1) rapid selection of target crystallizable proteins at the proteome level, (2) identification of site non-optimality for protein crystallization and systematic analysis of all potential single-point mutations that might enhance protein crystallization propensity, and (3) annotation of target protein based on predicted structural properties. We applied the design mode of Crysalis to identify site non-optimality for protein crystallization on a proteome-scale, focusing on proteins currently classified as non-crystallizable. Our results revealed that site non-optimality is based on biases related to residues, predicted structures, physicochemical properties, and sequence loci, which provides in-depth understanding of the features influencing protein crystallization. Crysalis is freely available at http://nmrcen.xmu.edu.cn/crysalis/.

  5. PiRaNhA: a server for the computational prediction of RNA-binding residues in protein sequences

    Science.gov (United States)

    Murakami, Yoichi; Spriggs, Ruth V.; Nakamura, Haruki; Jones, Susan

    2010-01-01

    The PiRaNhA web server is a publicly available online resource that automatically predicts the location of RNA-binding residues (RBRs) in protein sequences. The goal of functional annotation of sequences in the field of RNA binding is to provide predictions of high accuracy that require only small numbers of targeted mutations for verification. The PiRaNhA server uses a support vector machine (SVM), with position-specific scoring matrices, residue interface propensity, predicted residue accessibility and residue hydrophobicity as features. The server allows the submission of up to 10 protein sequences, and the predictions for each sequence are provided on a web page and via email. The prediction results are provided in sequence format with predicted RBRs highlighted, in text format with the SVM threshold score indicated and as a graph which enables users to quickly identify those residues above any specific SVM threshold. The graph effectively enables the increase or decrease of the false positive rate. When tested on a non-redundant data set of 42 protein sequences not used in training, the PiRaNhA server achieved an accuracy of 85%, specificity of 90% and a Matthews correlation coefficient of 0.41 and outperformed other publicly available servers. The PiRaNhA prediction server is freely available at http://www.bioinformatics.sussex.ac.uk/PIRANHA. PMID:20507911

  6. Research on Distributed Computing%分布式计算方法的研究

    Institute of Scientific and Technical Information of China (English)

    黎远松

    2001-01-01

    To address the shortcomings of the client/server architecture, the distributed computing architecture is analyzed and studied, and an application solution based on the distributed architecture is put forward.

  7. Demonstration of blind quantum computing.

    Science.gov (United States)

    Barz, Stefanie; Kashefi, Elham; Broadbent, Anne; Fitzsimons, Joseph F; Zeilinger, Anton; Walther, Philip

    2012-01-20

    Quantum computers, besides offering substantial computational speedups, are also expected to preserve the privacy of a computation. We present an experimental demonstration of blind quantum computing in which the input, computation, and output all remain unknown to the computer. We exploit the conceptual framework of measurement-based quantum computation that enables a client to delegate a computation to a quantum server. Various blind delegated computations, including one- and two-qubit gates and the Deutsch and Grover quantum algorithms, are demonstrated. The client only needs to be able to prepare and transmit individual photonic qubits. Our demonstration is crucial for unconditionally secure quantum cloud computing and might become a key ingredient for real-life applications, especially when considering the challenges of making powerful quantum computers widely available.

  8. Computational Science and Engineering Online (CSE-Online): a cyber-infrastructure for scientific computing.

    Science.gov (United States)

    Truong, Thanh N; Nayak, Manohar; Huynh, Hung H; Cook, Tom; Mahajan, Priya; Tran, LeThuy T; Bharath, Jannu; Jain, Shrish; Pham, Ha B; Boonyasiriwat, Chaiwoot; Nguyen, Nhat; Andersen, Evan; Kim, Yong; Choe, Suengkeol; Choi, Jihoon; Cheatham, Thomas E; Facelli, Julio C

    2006-01-01

    With the expansion of the Internet and World Wide Web (or the Web), research environments have changed dramatically. As a result, the need to be able to efficiently and securely access information and resources from remote computer systems is becoming even more critical. This paper describes the development of an extendable integrated Web-accessible simulation environment for computational science and engineering called Computational Science and Engineering Online (CSE-Online; http://cse-online.net). CSE-Online is based on a unique client-server software architecture that can distribute the workload between the client and server computers in such a way as to minimize the communication between the client and server, thus making the environment less-sensitive to network instability. Furthermore, the new software architecture allows the user to access data and resources on one or more remote servers as well as on the computing grid while having the full capability of the Web-services collaborative environment. It can be accessed anytime and anywhere from a Web browser connected to the network by either a wired or wireless connection. It has different modes of operations to support different working environments and styles. CSE-Online is evolving into middleware that can provide a framework for accessing and managing remote data and resources including the computing grid for any domain, not necessarily just within computational science and engineering.

  9. A Java Thick Client User Interface for Grid Processing

    Science.gov (United States)

    Hesselroth, T.

    2005-12-01

    A user interface (CAPRI) that is configurable at runtime has been developed, allowing application features to be maintained and upgraded on a central server and made available to users without the need to reinstall software. The user interface is specified by an XML file accessed through a URL and parsed by the open-source SWIX library, which returns a completely laid-out container with the application's controls. A set of generic model-view-controller-action classes is also instantiated by the CAPRI package based on parsing of the input XML file. Hierarchical relationships present in the XML file are reflected in membership relationships among the classes. An event-driven architecture with a central event handler allows for convenient extensibility. Client/server software is based on the Java Web Services package with SOAP message passing. The server has access to data and computing resources and brokers the requested computation. Sun Grid Engine software is used to manage the cluster of processing nodes. This application has been deployed at the Spitzer Science Center to allow rapid interactive processing of science data.
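
    The sketch below illustrates the general idea of a UI described in XML and instantiated at runtime with a central event handler. It is written in Python for illustration only (the system above uses Java and the SWIX library); the XML schema, tag names and the action table are all hypothetical.

        # Minimal sketch of a runtime-configurable UI driven by an XML description,
        # analogous to the SWIX-based approach described above. The XML schema, tag
        # names and the dispatch table are hypothetical.
        import xml.etree.ElementTree as ET

        LAYOUT_XML = """
        <frame title="Grid Processing">
          <panel name="job">
            <button name="submit" action="submit_job"/>
            <button name="status" action="query_status"/>
          </panel>
        </frame>
        """

        class Widget:
            def __init__(self, tag, attrs, children):
                self.tag, self.attrs, self.children = tag, attrs, children
            def __repr__(self):
                return f"<{self.tag} {self.attrs} children={len(self.children)}>"

        def build(node):
            """Recursively turn an XML element tree into a tree of Widget objects."""
            return Widget(node.tag, dict(node.attrib), [build(child) for child in node])

        ACTIONS = {
            "submit_job": lambda: print("submitting job to server..."),
            "query_status": lambda: print("querying job status..."),
        }

        def fire(widget, name):
            """Central event handler: look up the action bound to a named button."""
            for child in widget.children:
                if child.attrs.get("name") == name:
                    ACTIONS[child.attrs["action"]]()
                fire(child, name)

        root = build(ET.fromstring(LAYOUT_XML))
        print(root)
        fire(root, "submit")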

  10. Server-Based Data Push Architecture for Multi-Processor Environments

    Institute of Scientific and Technical Information of China (English)

    Xian-He Sun; Surendra Byna; Yong Chen

    2007-01-01

    Data access delay is a major bottleneck in utilizing current high-end computing (HEC) machines. Prefetching, where data is fetched before the CPU demands it, has been considered an effective solution for masking data access delay. However, current client-initiated prefetching strategies, where a computing processor initiates prefetching instructions, have many limitations. They do not work well for applications with complex, non-contiguous data access patterns. While technology advances continue to increase the gap between computing and data access performance, trading computing power for reduced data access delay has become a natural choice. In this paper, we present a server-based data-push approach and discuss its associated implementation mechanisms. In the server-push architecture, a dedicated server called the Data Push Server (DPS) initiates and proactively pushes data closer to the client in time. Issues such as what data to fetch, when to fetch, and how to push are studied. The SimpleScalar simulator is modified with a dedicated prefetching engine that pushes data for another processor to test DPS-based prefetching. Simulation results show that the L1 cache miss rate can be reduced by up to 97% (71% on average) over a superscalar processor for SPEC CPU2000 benchmarks that have high cache miss rates.

  11. The RNAsnp web server

    DEFF Research Database (Denmark)

    Radhakrishnan, Sabarinathan; Tafer, Hakim; Seemann, Ernst Stefan;

    2013-01-01

    , are derived from extensive pre-computed tables of distributions of substitution effects as a function of gene length and GC content. Here, we present a web service that not only provides an interface for RNAsnp but also features a graphical output representation. In addition, the web server is connected...... to a local mirror of the UCSC genome browser database that enables the users to select the genomic sequences for analysis and visualize the results directly in the UCSC genome browser. The RNAsnp web server is freely available at: http://rth.dk/resources/rnasnp/....

  12. Reviews of computing technology: Software overview

    Energy Technology Data Exchange (ETDEWEB)

    Hartshorn, W.R.; Johnson, A.L.

    1994-01-05

    The Savannah River Site Computing Architecture states that the site computing environment will be standards-based, data-driven, and workstation-oriented. Larger server systems deliver needed information to users in a client-server relationship. Goals of the Architecture include utilizing computing resources effectively, maintaining a high level of data integrity, developing a robust infrastructure, and storing data in such a way as to promote accessibility and usability. This document describes the current storage environment at Savannah River Site (SRS) and presents some of the problems that will be faced and strategies that are planned over the next few years.

  13. A distributed computing tool for generating neural simulation databases.

    Science.gov (United States)

    Calin-Jageman, Robert J; Katz, Paul S

    2006-12-01

    After developing a model neuron or network, it is important to systematically explore its behavior across a wide range of parameter values or experimental conditions, or both. However, compiling a very large set of simulation runs is challenging because it typically requires both access to and expertise with high-performance computing facilities. To lower the barrier for large-scale model analysis, we have developed NeuronPM, a client/server application that creates a "screen-saver" cluster for running simulations in NEURON (Hines & Carnevale, 1997). NeuronPM provides a user-friendly way to use existing computing resources to catalog the performance of a neural simulation across a wide range of parameter values and experimental conditions. The NeuronPM client is a Windows-based screen saver, and the NeuronPM server can be hosted on any Apache/PHP/MySQL server. During idle time, the client retrieves model files and work assignments from the server, invokes NEURON to run the simulation, and returns results to the server. Administrative panels make it simple to upload model files, define the parameters and conditions to vary, and then monitor client status and work progress. NeuronPM is open-source freeware and is available for download at http://neuronpm.homeip.net . It is a useful entry-level tool for systematically analyzing complex neuron and network simulations.
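
    The fetch-run-report cycle of such a screen-saver client can be sketched as below. The server URL, JSON field names and the simulator command line are hypothetical placeholders, not the actual NeuronPM protocol.

        # Sketch of a NeuronPM-style work cycle: fetch an assignment, run the simulator,
        # post results back. Endpoints and JSON fields are hypothetical placeholders.
        import subprocess
        import time
        import requests

        SERVER = "http://example.org/neuronpm"   # hypothetical Apache/PHP/MySQL endpoint

        def work_once():
            job = requests.get(f"{SERVER}/assignment.php", timeout=30).json()
            if not job:
                return False
            # Write the model file and parameter set supplied by the server.
            with open("model.hoc", "w") as fh:
                fh.write(job["model_hoc"])
            # Invoke the simulator on the downloaded model (command line is illustrative).
            run = subprocess.run(["nrniv", "model.hoc"], capture_output=True, text=True)
            requests.post(f"{SERVER}/result.php",
                          json={"job_id": job["id"], "stdout": run.stdout}, timeout=30)
            return True

        if __name__ == "__main__":
            while True:                 # the real client only runs during idle time
                if not work_once():
                    time.sleep(60)      # no work available; poll again later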

  14. Analysis of practical backoff protocols for contention resolution with multiple servers

    Energy Technology Data Exchange (ETDEWEB)

    Goldberg, L.A. [Univ. of Warwick, Coventry (United Kingdom); MacKenzie, P.D. [Sandia National Lab., Albuquerque, NM (United States)

    1996-12-31

    Backoff protocols are probably the most widely used protocols for contention resolution in multiple access channels. In this paper, we analyze the stochastic behavior of backoff protocols for contention resolution among a set of clients and servers, each server being a multiple access channel that deals with contention like an Ethernet channel. We use the standard model in which each client generates requests for a given server according to a Bernoulli distribution with a specified mean. The client-server request rate of a system is the maximum over all client-server pairs (i, j) of the sum of all request rates associated with either client i or server j. Our main result is that any superlinear polynomial backoff protocol is stable for any multiple-server system with a sub-unit client-server request rate. We confirm the practical relevance of our result by demonstrating experimentally that the average waiting time of requests is very small when such a system is run with reasonably few clients and reasonably small request rates such as those that occur in actual ethernets. Our result is the first proof of stability for any backoff protocol for contention resolution with multiple servers. Our result is also the first proof that any weakly acknowledgment based protocol is stable for contention resolution with multiple servers and such high request rates. Two special cases of our result are of interest. Hastad, Leighton and Rogoff have shown that for a single-server system with a sub-unit client-server request rate any modified superlinear polynomial backoff protocol is stable. These modified backoff protocols are similar to standard backoff protocols but require more random bits to implement. The special case of our result in which there is only one server extends the result of Hastad, Leighton and Rogoff to standard (practical) backoff protocols. Finally, our result applies to dynamic routing in optical networks.
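
    To make the protocol family concrete, the toy simulation below runs a superlinear polynomial backoff (the retransmission window grows as a power greater than one of the number of failures) on a single slotted contention channel. The arrival model, parameters and waiting-time estimate are illustrative only, not the paper's experimental setup.

        # Toy slotted simulation of a superlinear polynomial backoff protocol (window
        # grows as (failures + 1)**alpha with alpha > 1) on one contention channel.
        import random

        def simulate(num_clients=20, arrival_prob=0.02, alpha=2.0, slots=20000, seed=1):
            rng = random.Random(seed)
            backlog = []          # list of [failures, next_attempt_slot] per queued request
            served = waited = 0
            for t in range(slots):
                # New requests arrive at each client with a small Bernoulli probability.
                for _ in range(num_clients):
                    if rng.random() < arrival_prob:
                        backlog.append([0, t])
                ready = [r for r in backlog if r[1] <= t]
                if len(ready) == 1:                      # exactly one sender: success
                    backlog.remove(ready[0])
                    served += 1
                elif len(ready) > 1:                     # collision: everyone backs off
                    for r in ready:
                        r[0] += 1
                        window = int((r[0] + 1) ** alpha)
                        r[1] = t + 1 + rng.randrange(window)
                waited += len(backlog)
            return served, waited / max(served, 1)

        print("served, mean wait (slots):", simulate())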

  15. Practical Client Puzzle from Repeated Squaring

    NARCIS (Netherlands)

    Jeckmans, A.

    2009-01-01

    Cryptographic puzzles have been proposed by Merkle [15] to relay secret information between parties over an insecure channel. Client puzzles, a type of cryptographic puzzle, have been proposed by Juels and Brainard [8] to defend a server against denial of service attacks. However there is no general

  16. Joint source-channel rate allocation and client clustering for scalable multistream IPTV.

    Science.gov (United States)

    Chakareski, Jacob

    2015-08-01

    We design a system framework for streaming scalable internet protocol television (IPTV) content to heterogenous clients. The backbone bandwidth is optimally allocated between source and parity data layers that are delivered to the client population. The assignment of stream layers to clients is done based on their access link data rate and packet loss characteristics, and is part of the optimization. We design three techniques for jointly computing the optimal number of multicast sessions, their respective source and parity rates, and client membership, either exactly or approximatively, at lower complexity. The latter is achieved via an iterative coordinate descent algorithm that only marginally underperforms relative to the exact analytic solution. Through experiments, we study the advantages of our framework over common IPTV systems that deliver the same source and parity streams to every client. We observe substantial gains in video quality in terms of both its average value and standard deviation over the client population. In addition, for energy efficiency, we propose to move the parity data generation part to the edge of the backbone network, where each client connects to its IPTV stream. We analytically study the conditions under which such an approach delivers energy savings relative to the conventional case of source and parity data generation at the IPTV streaming server. Finally, we demonstrate that our system enables more consistent streaming performance, when the clients' access link packet loss distribution is varied, relative to the two baseline methods used in our investigation, and maintains the same performance as an ideal system that serves each client independently.

  17. Network characteristics for server selection in online games

    Science.gov (United States)

    Claypool, Mark

    2008-01-01

    Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the network characteristics of online game servers are not well-understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provides statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability--latency and fairness. Analysis finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or race games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers that seek to improve game server selection, whether for single or multiple players.

  18. Classification-based Multi-client Video Transmission over Heterogeneous Networks

    Directory of Open Access Journals (Sweden)

    Bo Li

    2013-08-01

    Full Text Available Real-time video streaming over networks operates under stringent network resource constraints, with multiple video clients competing for limited network resources. In this paper, we study the problem of bandwidth allocation for video transmission over heterogeneous networks, with multiple video clients connecting to the video server simultaneously and demanding video services, and aim to provide the best possible Quality of Service (QoS) under the limited bandwidth of both the video server and the video clients. We propose a classification-based approach for multi-client video transmission over heterogeneous networks (CMVT). Firstly, the video server detects the available bandwidth of the video clients and classifies the clients into different classes. Secondly, the limited export bandwidth of the server is allocated to the different video client classes using the classification results and a greedy algorithm. Finally, the video server transmits video streams to clients in different classes through unicast, and to clients in the same class through unicast and forwarding. Experimental results demonstrate that the proposed video transmission method uses the network bandwidth efficiently and provides better video quality to more video clients.
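
    The sketch below shows the classify-then-allocate idea in miniature: clients are grouped by measured access bandwidth, and the server's export bandwidth is then spent greedily, one layer rate per class. The class boundaries, layer rates and greedy rule are illustrative assumptions, not the exact CMVT algorithm.

        # Sketch of classification-based bandwidth allocation in the spirit of CMVT.
        LAYER_RATES = [400, 800, 1600]          # kbps needed for quality levels 1..3
        CLASS_EDGES = [500, 1000, 2000]         # kbps class boundaries (assumed)

        def classify(clients):
            """Map each client to the highest quality class its access link can carry."""
            groups = {}
            for name, bw in clients.items():
                level = sum(bw >= edge for edge in CLASS_EDGES)      # 0..3
                groups.setdefault(level, []).append(name)
            return groups

        def allocate(groups, server_bw):
            """Greedily spend server bandwidth, serving each class at the best layer it fits."""
            plan, remaining = {}, server_bw
            # Serve the most capable classes first; within a class a single unicast
            # stream is sent and then forwarded among its members.
            for level in sorted(groups, reverse=True):
                if level == 0 or not groups[level]:
                    continue
                rate = LAYER_RATES[min(level, len(LAYER_RATES)) - 1]
                if rate <= remaining:
                    plan[level] = rate
                    remaining -= rate
            return plan, remaining

        clients = {"a": 2500, "b": 1200, "c": 600, "d": 300}
        groups = classify(clients)
        print(groups, allocate(groups, server_bw=2500))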

  19. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    Science.gov (United States)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014, Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367), tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System Bodies are going to be progressively available. Derived parameters such as summary products and indices can be produced through WCPS queries, as well as derived imagery colour combination products, dynamically generated and accessed also through the OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the Open Source NASA WorldWind (e.g. Hogan, 2011) virtual globe as its visualisation engine, and the array database Rasdaman Community Edition as the core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible on http://planetserver.eu. All its code base is going to be available on GitHub, on
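
    To give a flavour of posing such a query to a rasdaman-backed service, the sketch below sends a WCPS band-ratio expression and saves the returned image. The endpoint URL, coverage identifier, band names and the exact service parameters are assumptions for illustration; deployments may expose WCPS differently.

        # Illustrative WCPS request of the kind described above: compute a band-ratio
        # index over a hyperspectral coverage and return it as a PNG. The endpoint,
        # coverage identifier and band names are hypothetical placeholders.
        import requests

        ENDPOINT = "http://planetserver.example.org/rasdaman/ows"   # placeholder URL

        query = """
        for c in (mars_crism_cube)
        return encode(
            (c.band_233 - c.band_13) / (c.band_233 + c.band_13),
            "png")
        """

        params = {
            "service": "WCS",
            "version": "2.0.1",
            "request": "ProcessCoverages",   # assumed WCPS entry point of the service
            "query": query,
        }

        resp = requests.get(ENDPOINT, params=params, timeout=120)
        resp.raise_for_status()
        with open("band_ratio.png", "wb") as fh:
            fh.write(resp.content)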

  20. Exploring IBM eServer zSeries and S/390 servers see why IBM's most powerful computer family has become more popular than ever!

    CERN Document Server

    Hoskins, Jim

    2002-01-01

    Considered the bible of the IBM zSeries and S/390 world, this new edition closely examines the role large computers will play in the new century. All the new hardware models and operating system products (Linux, VSE, MVS, VM, AIX, and Open Edition) are now available for the zSeries and are fully explained, as are critical business issues such as cost justification, lease versus purchase, support, security, and maintenance. Hypothetical small, medium, and large businesses are described and then outfitted with the appropriate zSeries solution.

  1. Moving PB Client/Server Application to Three(N) Tier Architecture%PB传统两层客户机/服务器系统到EAServer三(N)层结构的迁移

    Institute of Scientific and Technical Information of China (English)

    王春平; 段隆振

    2003-01-01

    This paper describes how to use Sybase's enterprise application server EAServer (Enterprise Application Server), the Web application development framework PBWF (PowerBuilder Web Framework) and component technology to migrate an existing two-tier client/server system to a three-tier (or N-tier) architecture.

  2. Client Centred Design

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Nielsen, Janni; Tweddell Levinsen, Karin

    2004-01-01

    Abstract In this paper the Human Computer Interaction (HCI) Research Group reports on the pre-phase of an e-learning project, which was carried out in collaboration with the client. The project involved an initial exploration of the problem spaces, possibilities and challenges for an online accredited Continued Medical Education (CME) programme at the Lundbeck Institute. The CME programme aims at end-users, which are primarily general practitioners, but also specialists (psychiatrists and psychologists), from all over the world. The assumption was that it would be possible to identify and build on existing resources within the client organisation, leading to grounding of design decisions and a match between the e-learning environment designed and the capabilities of the client.

  3. ARQUITETURA E PROTOCOLO PARA APLICAÇÃO VISUAL THIN CLIENT

    Directory of Open Access Journals (Sweden)

    Vanius Roberto Bittencourt

    2011-12-01

    Full Text Available In this paper, different solutions to an increasingly common need today - running visual thin-client applications - are studied. To address the common problems, an open architecture and protocol for client-server applications with a visual thin client on Windows is proposed.

  4. Experimental Demonstration of Blind Quantum Computing

    CERN Document Server

    Barz, Stefanie; Broadbent, Anne; Fitzsimons, Joseph F; Zeilinger, Anton; Walther, Philip

    2011-01-01

    Quantum computers, besides offering substantial computational speedups, are also expected to provide the possibility of preserving the privacy of a computation. Here we show the first such experimental demonstration of blind quantum computation where the input, computation, and output all remain unknown to the computer. We exploit the conceptual framework of measurement-based quantum computation that enables a client to delegate a computation to a quantum server. We demonstrate various blind delegated computations, including one- and two-qubit gates and the Deutsch and Grover algorithms. Remarkably, the client only needs to be able to prepare and transmit individual photonic qubits. Our demonstration is crucial for future unconditionally secure quantum cloud computing and might become a key ingredient for real-life applications, especially when considering the challenges of making powerful quantum computers widely available.

  5. DYNAMIC REQUEST DISPATCHING ALGORITHM FOR WEB SERVER CLUSTER

    Institute of Scientific and Technical Information of China (English)

    Yang Zhenjiang; Zhang Deyun; Sun Qindong; Sun Qing

    2006-01-01

    Distributed architectures support increased load on popular web sites by dispatching client requests transparently among multiple servers in a cluster. The Packet Single-Rewriting technology and the client-address hashing algorithm of ONE-IP technology, which keep application sessions on the same server, are analyzed, and an improved request dispatching algorithm which is simple, effective and supports dynamic load balancing is proposed. In this algorithm, the dispatcher decides which server node will process a request by applying a hash function to the client IP address and comparing the result with each server's assigned identifier subset; it adjusts the size of each subset according to the performance and current load of each server, so as to utilize all servers' resources effectively. Simulation shows that the improved algorithm has better performance than the original one.
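
    The core dispatching idea can be sketched as follows: hash the client address into a small identifier space, route to whichever server owns that identifier, and move identifiers between servers as load shifts. The hash space size, load metric and rebalancing rule below are illustrative assumptions, not the paper's exact algorithm.

        # Sketch of hash-based request dispatching with per-server identifier subsets.
        import hashlib

        HASH_SPACE = 256   # client addresses hash to identifiers 0..255 (assumed size)

        class Dispatcher:
            def __init__(self, servers):
                # Start with equally sized, contiguous identifier subsets.
                bounds = [round(i * HASH_SPACE / len(servers)) for i in range(len(servers) + 1)]
                self.subsets = {s: set(range(bounds[i], bounds[i + 1]))
                                for i, s in enumerate(servers)}

            def pick(self, client_ip):
                # The same client IP always hashes to the same identifier, which keeps
                # an application session on the same back-end server.
                h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16) % HASH_SPACE
                for server, ids in self.subsets.items():
                    if h in ids:
                        return server
                raise RuntimeError("identifier not covered")

            def rebalance(self, loads):
                # Move one identifier from the most loaded to the least loaded server.
                busy = max(loads, key=loads.get)
                idle = min(loads, key=loads.get)
                if busy != idle and self.subsets[busy]:
                    self.subsets[idle].add(self.subsets[busy].pop())

        d = Dispatcher(["web1", "web2", "web3"])
        print(d.pick("192.0.2.17"), d.pick("192.0.2.17"))   # sticky: same server twice
        d.rebalance({"web1": 90, "web2": 40, "web3": 10})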

  6. Delay decomposition at a single server queue with constant service time and multiple inputs. [Waiting time on computer network

    Science.gov (United States)

    Ziegler, C.; Schilling, D. L.

    1977-01-01

    Two networks consisting of single server queues, each with a constant service time, are considered. The external inputs to each network are assumed to follow some general probability distribution. Several interesting equivalencies that exist between the two networks considered are derived. This leads to the introduction of an important concept in delay decomposition. It is shown that the waiting time experienced by a customer can be decomposed into two basic components called self delay and interference delay.
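
    The decomposition described above can be written compactly; the notation below is illustrative rather than the paper's own, with W denoting the total waiting time of a customer at the constant-service-time single-server queue:

        \[
          W \;=\; W_{\mathrm{self}} \;+\; W_{\mathrm{int}},
        \]

    where W_self (self delay) is the waiting the customer would incur if its own input stream were the only traffic at the server, and W_int (interference delay) is the additional waiting caused by the other input streams sharing the same server.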

  7. clientes surdos

    Directory of Open Access Journals (Sweden)

    Wiliam César Alves Machado

    2015-01-01

    Full Text Available Objective: to identify how professionals of a municipal physical rehabilitation unit communicate with deaf people who seek specialized care. Methods: an exploratory, descriptive study with a qualitative approach, carried out with 32 professionals working in physical rehabilitation, using a self-administered instrument. Results: two thematic categories emerged from the data analysis: Using Brazilian Sign Language; Improvising communication strategies to interact with deaf clients. The improvised strategies used by professionals to communicate with deaf people can create barriers that negatively affect the quality of the services provided to this population. Conclusion: communication is deficient, and effective initiatives focused on the qualification of professionals working in the field of rehabilitation can help them master Brazilian Sign Language, guaranteeing deaf clients adequate care, equivalent to that provided to people without hearing impairment.

  8. Network Congestion Control in 4G Technology Through Iterative Server

    Directory of Open Access Journals (Sweden)

    Khaleel Ahmad

    2012-07-01

    Full Text Available During the last few decades, mobile communication has developed rapidly. The increasing dependency of people on telecommunication resources is pushing current technological developments in the mobile world even further. Real-time multimedia applications, such as live TV, live movies, video conferencing, VoIP and online gaming, are key applications for the success of 4G. In today's Internet these applications are not subject to congestion control, so the growing popularity of these applications may endanger the stability of the Internet. In this paper, we propose a novel model to solve the network congestion problem through an iterative server. In this model, when a client sends a request to the server, the server generates an individual iterative server for the requesting client. After the request is completed, the iterative server is automatically destroyed.
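
    The create-serve-destroy cycle of a per-client handler can be sketched with a standard threading server, as below. This is an application-level illustration of the idea only (here each request gets its own short-lived thread), not the 4G model proposed in the paper; the port and reply format are arbitrary.

        # Sketch of the per-client handler idea: each incoming request gets its own
        # short-lived worker that is torn down as soon as the request is served.
        import socketserver
        import threading

        class OneShotHandler(socketserver.StreamRequestHandler):
            def handle(self):
                # Read one request line, answer it, and let the worker terminate,
                # mimicking the "create, serve, destroy" cycle described above.
                request = self.rfile.readline().strip()
                self.wfile.write(b"served by " + threading.current_thread().name.encode()
                                 + b": " + request + b"\n")

        class PerClientServer(socketserver.ThreadingTCPServer):
            daemon_threads = True      # workers disappear when they finish / on shutdown
            allow_reuse_address = True

        if __name__ == "__main__":
            with PerClientServer(("127.0.0.1", 9090), OneShotHandler) as srv:
                print("listening on 127.0.0.1:9090 (Ctrl+C to stop)")
                srv.serve_forever()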

  9. Measurement-only verifiable blind quantum computing with quantum input verification

    Science.gov (United States)

    Morimae, Tomoyuki

    2016-10-01

    Verifiable blind quantum computing is a secure delegated quantum computing where a client with a limited quantum technology delegates her quantum computing to a server who has a universal quantum computer. The client's privacy is protected (blindness), and the correctness of the computation is verifiable by the client despite her limited quantum technology (verifiability). There are mainly two types of protocols for verifiable blind quantum computing: the protocol where the client has only to generate single-qubit states and the protocol where the client needs only the ability of single-qubit measurements. The latter is called the measurement-only verifiable blind quantum computing. If the input of the client's quantum computing is a quantum state, whose classical efficient description is not known to the client, there was no way for the measurement-only client to verify the correctness of the input. Here we introduce a protocol of measurement-only verifiable blind quantum computing where the correctness of the quantum input is also verifiable.

  10. DYNAMIC REQUEST DISPATCHING ALGORITHM FOR WEB SERVER CLUSTER

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The overall increase in traffic on the WWW causes a disproportionate increase in client requests to popular web sites. Site administrators constantly face the requirement to improve server capacity. A web server cluster is a popular solution. It uses a group of independent servers that are managed as a single system for higher availability, easier manageability and greater scalability. Many web sites have adopted this solution. Request dispatching[1-2] is one of the core technologies used by parallel web server clusters...

  11. What client?

    DEFF Research Database (Denmark)

    Unterrainer, Walter

    2015-01-01

    but they are most notably creative in generating new forms of financing/enabling public projects as well as getting rewarded their own efforts. It is an ambition of the paper to document common grounds from their different experiences and to make them productive for our educational institutions....... rising pressure for new approaches towards space, urbanization, environmental challenges, technological inventions, transformation of cities and buildings on one hand and the decline in impact, reputation, self-esteem and economy of ´conventional´architectural profession on the other hand. In Asia like...... engage architects, no matter how urgent the problems are. It is the architects who must reverse their understanding of ´clients´, for the sake of these challenges as well as for their own professional future. This started happen very much in contrast to predominant architectural education models: Young...

  12. Using latency as a QoS indicator for global cloud computing services

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Riaz, Tahir; Dubalski, Bozydar

    2013-01-01

    Many globally distributed cloud computing (CC) applications and services running over the Internet, between globally dispersed clients and servers, will require certain levels of QoS in order to deliver and give a sufficiently smooth user experience. This would be essential for real-time streamin...

  13. Sending servers to Morocco

    CERN Multimedia

    Joannah Caborn Wengler

    2012-01-01

    Did you know that computer centres are like people? They breathe air in and out like a person, they have to be kept at the right temperature, and they can even be organ donors. As part of a regular cycle of equipment renewal, the CERN Computer Centre has just donated 161 retired servers to universities in Morocco.   Prof. Abdeslam Hoummada and CERN DG Rolf Heuer seeing off the servers on the beginning of their journey to Morocco. “Many people don’t realise, but the Computer Centre is like a living thing. You don’t just install equipment and it runs forever. We’re continually replacing machines, broken parts and improving things like the cooling.” Wayne Salter, Leader of the IT Computing Facilities Group, watches over the Computer Centre a bit like a nurse monitoring a patient’s temperature, especially since new international recommendations for computer centre environmental conditions were released. “A new international s...

  14. Low-Bandwidth and Non-Compute Intensive Remote Identification of Microbes from Raw Sequencing Reads

    DEFF Research Database (Denmark)

    Gautier, Laurent; Lund, Ole

    2013-01-01

    reference DNA indexed, and a client with raw sequencing reads. The client sends a sample of unidentified reads, and in return receives a list of matching references. Sequences for the references can be retrieved and used for exhaustive computation on the reads, such as alignment. To demonstrate...... this approach we have implemented a web server, indexing tens of thousands of publicly available genomes and genomic regions from various organisms and returning lists of matching hits from query sequencing reads. We have also implemented two clients: one running in a web browser, and one as a python script......, allowing a fully automated processing of sequencing data and routine instant quality check of sequencing runs from desktop sequencers. A web access is available at http://tapir.cbs.dtu.dk. The source code for a python command-line client, a server, and supplementary data are available at http://bit.ly/1aURxkc....

  15. A Mechanism Supporting the Client/Server Relationship in the Operating System of Distributed System “THUDS”

    Institute of Scientific and Technical Information of China (English)

    廖先Zhi; 金兰

    1991-01-01

    This paper presents a distributed operating system modeled as an abstract machine that provides all the distributed processes with the same set of services. The kernel of our operating system supports services which are achieved by a remote procedure call on requests by parallel processes. Therefore, a scheme for solving the client-server relationship is required. In our system there is more than one client and, at least, a receive would be required for each. Similarly, there is more than one server, such that the send in a client should produce a message that can be received by every server. Consequently, a mechanism well suited for programming multiple-clients/single-server and single-client/multiple-servers interactions is proposed.

  16. Quantum computing on encrypted data.

    Science.gov (United States)

    Fisher, K A G; Broadbent, A; Shalm, L K; Yan, Z; Lavoie, J; Prevedel, R; Jennewein, T; Resch, K J

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.

  17. Blind Quantum Signature with Blind Quantum Computation

    Science.gov (United States)

    Li, Wei; Shi, Ronghua; Guo, Ying

    2016-12-01

    Blind quantum computation allows a client without quantum abilities to interact with a quantum server to perform an unconditionally secure computing protocol while protecting the client's privacy. Motivated by the confidentiality of blind quantum computation, a blind quantum signature scheme with a laconic structure is designed. Different from traditional signature schemes, the signing and verifying operations are performed through measurement-based quantum computation. The inputs of the blind quantum computation are securely controlled with multi-qubit entangled states. The unique signature of the transmitted message is generated by the signer without leaking information in imperfect channels, while the receiver can verify the validity of the signature using the quantum matching algorithm. The security is guaranteed by the entanglement of the quantum system for blind quantum computation. It provides a potential practical application for e-commerce in cloud computing and first-generation quantum computation.

  18. Blind Quantum Signature with Blind Quantum Computation

    Science.gov (United States)

    Li, Wei; Shi, Ronghua; Guo, Ying

    2017-04-01

    Blind quantum computation allows a client without quantum abilities to interact with a quantum server to perform an unconditionally secure computing protocol while protecting the client's privacy. Motivated by the confidentiality of blind quantum computation, a blind quantum signature scheme with a laconic structure is designed. Different from traditional signature schemes, the signing and verifying operations are performed through measurement-based quantum computation. The inputs of the blind quantum computation are securely controlled with multi-qubit entangled states. The unique signature of the transmitted message is generated by the signer without leaking information in imperfect channels, while the receiver can verify the validity of the signature using the quantum matching algorithm. The security is guaranteed by the entanglement of the quantum system for blind quantum computation. It provides a potential practical application for e-commerce in cloud computing and first-generation quantum computation.

  19. Dosimetry computer module of the gamma irradiator of ININ; Modulo informatico de dosimetria del irradiador gamma del ININ

    Energy Technology Data Exchange (ETDEWEB)

    Ledezma F, L. E.; Baldomero J, R. [ININ, Gerencia de Sistemas Informaticos, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Agis E, K. A., E-mail: luis.ledezma@inin.gob.mx [Universidad Autonoma del Estado de Mexico, Facultad de Ingenieria, Cerro de Coatepec s/n, Ciudad Universitaria, 50100 Toluca, Estado de Mexico (Mexico)

    2012-10-15

    This work presents the technical specifications for the upgrade of the dosimetry module of the computer system of the gamma irradiator of the Instituto Nacional de Investigaciones Nucleares (ININ), which allows the integration and consultation of industrial dosimetry information under a client-server scheme. (Author)

  20. Research on the Remote Data Collection Based SQL Server

    Institute of Scientific and Technical Information of China (English)

    QI Xiangyang; LIN Shuzhong; CUI Hui; WANG Jiangfeng; SUN Huilai

    2006-01-01

    The remote data collection system was developed with Visual C++ and SQL Server database technology, adopting the Client/Server mode. The system uses the ADO database access technology to implement the communication procedure of the server, and the old data in the corresponding memory units of the database are updated in real time with new data gathered from the PLC through the serial port. The client uses network and database technology to query the data in the database, so a large amount of data on the operation of the production line is obtained. The goal of understanding the operating conditions of the production line is achieved through analysis of these data. The system has been successfully debugged in experiments.

  1. Research on message resource optimization in computer supported collaborative design

    Institute of Scientific and Technical Information of China (English)

    张敬谊; 张申生; 陈纯; 王波

    2004-01-01

    An adaptive mechanism is presented to reduce bandwidth usage and to optimize the use of computing resources of heterogeneous computer mixes utilized in CSCD to reach the goal of collaborative design in distributed-synchronous mode.The mechanism is realized on a C/S architecture based on operation information sharing. Firstly, messages are aggregated into packets on the client. Secondly, an outgoing-message weight priority queue with traffic adjusting technique is cached on the server. Thirdly, an incoming-message queue is cached on the client. At last, the results of implementing the proposed scheme in a simple collaborative design environment are presented.
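
    The two caching steps described above can be sketched as below: the client aggregates small operation messages into packets, and the server keeps its outgoing queue ordered by message weight. The weights, batch size and message contents are illustrative assumptions, not the paper's exact parameters.

        # Sketch of client-side packet aggregation and a server-side weight-priority
        # outgoing queue, in the spirit of the mechanism described above.
        import heapq
        import itertools
        import json

        class AggregatingClient:
            def __init__(self, max_batch=4):
                self.buffer, self.max_batch = [], max_batch
            def send(self, op):
                """Collect operations; return a packet only when the batch is full."""
                self.buffer.append(op)
                if len(self.buffer) >= self.max_batch:
                    packet, self.buffer = json.dumps(self.buffer), []
                    return packet
                return None

        class WeightedOutQueue:
            def __init__(self):
                self.heap, self.counter = [], itertools.count()
            def push(self, weight, message):
                # Higher weight = more urgent; negate for Python's min-heap.
                heapq.heappush(self.heap, (-weight, next(self.counter), message))
            def pop(self):
                return heapq.heappop(self.heap)[2]

        client = AggregatingClient()
        for i, op in enumerate(["move", "resize", "annotate", "delete"]):
            packet = client.send({"op": op, "seq": i})
        server_queue = WeightedOutQueue()
        server_queue.push(1, "cursor update")
        server_queue.push(5, packet)            # design changes outrank cursor noise
        print(server_queue.pop())               # the aggregated packet is sent first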

  2. Sirocco Storage Server v. pre-alpha 0.1

    Energy Technology Data Exchange (ETDEWEB)

    2015-12-18

    Sirocco is a parallel storage system under development, designed for write-intensive workloads on large-scale HPC platforms. It implements a keyvalue object store on top of a set of loosely federated storage servers that cooperate to ensure data integrity and performance. It includes support for a range of different types of storage transactions. This software release constitutes a conformant storage server, along with the client-side libraries to access the storage over a network.

  3. Blind quantum computing with weak coherent pulses.

    Science.gov (United States)

    Dunjko, Vedran; Kashefi, Elham; Leverrier, Anthony

    2012-05-18

    The universal blind quantum computation (UBQC) protocol [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, USA, 2009), pp. 517-526.] allows a client to perform quantum computation on a remote server. In an ideal setting, perfect privacy is guaranteed if the client is capable of producing specific, randomly chosen single qubit states. While from a theoretical point of view, this may constitute the lowest possible quantum requirement, from a pragmatic point of view, generation of such states to be sent along long distances can never be achieved perfectly. We introduce the concept of ϵ blindness for UBQC, in analogy to the concept of ϵ security developed for other cryptographic protocols, allowing us to characterize the robustness and security properties of the protocol under possible imperfections. We also present a remote blind single qubit preparation protocol with weak coherent pulses for the client to prepare, in a delegated fashion, quantum states arbitrarily close to perfect random single qubit states. This allows us to efficiently achieve ϵ-blind UBQC for any ϵ>0, even if the channel between the client and the server is arbitrarily lossy.

  4. Blind Quantum Computing with Weak Coherent Pulses

    Science.gov (United States)

    Dunjko, Vedran; Kashefi, Elham; Leverrier, Anthony

    2012-05-01

    The universal blind quantum computation (UBQC) protocol [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, USA, 2009), pp. 517-526.] allows a client to perform quantum computation on a remote server. In an ideal setting, perfect privacy is guaranteed if the client is capable of producing specific, randomly chosen single qubit states. While from a theoretical point of view, this may constitute the lowest possible quantum requirement, from a pragmatic point of view, generation of such states to be sent along long distances can never be achieved perfectly. We introduce the concept of ɛ blindness for UBQC, in analogy to the concept of ɛ security developed for other cryptographic protocols, allowing us to characterize the robustness and security properties of the protocol under possible imperfections. We also present a remote blind single qubit preparation protocol with weak coherent pulses for the client to prepare, in a delegated fashion, quantum states arbitrarily close to perfect random single qubit states. This allows us to efficiently achieve ɛ-blind UBQC for any ɛ>0, even if the channel between the client and the server is arbitrarily lossy.

  5. Gclust Server: 80545 [Gclust Server

    Lifescience Database Archive (English)

    Full Text Available 80545 SCE_YDL246C=SOR2; Cluster Sequences: Related Sequences (311); Sequence length: 357; Representative annotation: Protein of unknown function, computation...

  6. A Collaborative Digital Pathology System for Multi-Touch Mobile and Desktop Computing Platforms

    KAUST Repository

    Jeong, W.

    2013-06-13

    Collaborative slide image viewing systems are becoming increasingly important in pathology applications such as telepathology and E-learning. Despite rapid advances in computing and imaging technology, current digital pathology systems have limited performance with respect to remote viewing of whole slide images on desktop or mobile computing devices. In this paper we present a novel digital pathology client-server system that supports collaborative viewing of multi-plane whole slide images over standard networks using multi-touch-enabled clients. Our system is built upon a standard HTTP web server and a MySQL database to allow multiple clients to exchange image and metadata concurrently. We introduce a domain-specific image-stack compression method that leverages real-time hardware decoding on mobile devices. It adaptively encodes image stacks in a decorrelated colour space to achieve extremely low bitrates (0.8 bpp) with very low loss of image quality. We evaluate the image quality of our compression method and the performance of our system for diagnosis with an in-depth user study. © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  7. Miniaturized Airborne Imaging Central Server System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is a miniaturized airborne imaging central server system (MAICSS). MAICSS is designed as a high-performance-computer-based electronic backend that...

  8. Miniaturized Airborne Imaging Central Server System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is a miniaturized airborne imaging central server system (MAICSS). MAICSS is designed as a high-performance computer-based electronic backend that...

  9. Supporting Privacy of Computations in Mobile Big Data Systems

    Directory of Open Access Journals (Sweden)

    Sriram Nandha Premnath

    2016-05-01

    Full Text Available Cloud computing systems enable clients to rent and share computing resources of third party platforms, and have gained widespread use in recent years. Numerous varieties of mobile, small-scale devices such as smartphones, e-health devices, etc., across users, are connected to one another through the massive internetwork of vastly powerful servers on the cloud. While mobile devices store “private information” of users such as location, payment, health data, etc., they may also contribute “semi-public information” (which may include crowdsourced data such as transit, traffic, nearby points of interest, etc.) for data analytics. In such a scenario, a mobile device may seek to obtain the result of a computation, which may depend on its private inputs, crowdsourced data from other mobile devices, and/or any “public inputs” from other servers on the Internet. We demonstrate a new method of delegating real-world computations of resource-constrained mobile clients using an encrypted program known as the garbled circuit. Using the garbled version of a mobile client’s inputs, a server in the cloud executes the garbled circuit and returns the resulting garbled outputs. Our system assures privacy of the mobile client’s input data and output of the computation, and also enables the client to verify that the evaluator actually performed the computation. We analyze the complexity of our system. We measure the time taken to construct the garbled circuit as well as evaluate it for a varying number of servers. Using real-world data, we evaluate our system for a practical, privacy-preserving search application that locates the nearest point of interest for the mobile client to demonstrate feasibility.

  10. Microsoft SQL Server Reporting Services Recipes for Designing Expert Reports

    CERN Document Server

    Turley, Paul

    2010-01-01

    Learn to design more effective and sophisticated business reports. While most users of SQL Server Reporting Services are now comfortable designing and building simple reports, business today demands increasingly complex reporting. In this book, top Reporting Services design experts have contributed step-by-step recipes for creating various types of reports. Written by well-known SQL Server Reporting Services experts, this book gives you the tools to meet your clients' needs: SQL Server Reporting Services enables you to create a wide variety of reports; This guide helps you customize reports fo

  11. Server Interface Descriptions for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning; Møller, Anders; Su, Zhendong

    2013-01-01

    Automated testing of JavaScript web applications is complicated by the communication with servers. Specifically, it is difficult to test the JavaScript code in isolation from the server code and database contents. We present a practical solution to this problem. First, we demonstrate that formal...... server interface descriptions are useful in automated testing of JavaScript web applications for separating the concerns of the client and the server. Second, to support the construction of server interface descriptions for existing applications, we introduce an effective inference technique that learns...... communication patterns from sample data. By incorporating interface descriptions into the testing tool Artemis, our experimental results show that we increase the level of automation for high-coverage testing on a collection of JavaScript web applications that exchange JSON data between the clients and servers...

  12. Code Execution Security Mechanism for Open Cloud & Client Computing%开放云端计算环境中的任务执行代码安全机制

    Institute of Scientific and Technical Information of China (English)

    徐小龙; 耿卫建; 杨庚; 王汝传

    2012-01-01

    Cloud & client computing can aggregate the computing resources of network servers and Internet edge nodes to gain greater benefits. However, deploying tasks to terminal nodes brings corresponding security risks: the behavior of terminal nodes belonging to different users is clearly not reliable, so computing security is difficult to guarantee. In particular, a terminal node acting as a task executor may tamper with the program or data of the task and return a fake result, or pry into code and data with privacy requirements. This paper presents a new code protection mechanism based on an encryption function with an embedded verification code that meets both integrity and privacy requirements, making it possible to verify the correctness of returned results effectively and to keep the code from being spied on. To further improve the success rate of task execution and reduce job cycle time, tasks should be distributed to nodes with good reputations and a high task completion rate. The paper also proposes a credibility evaluation of nodes, describes the work procedure of the code protection mechanism, and analyses and verifies the security performance of the system in detail.

  13. Warm Server

    Directory of Open Access Journals (Sweden)

    Manisha Bahir

    2014-04-01

    Full Text Available Educational organizations are among the fastest growing industries in India and all over the world. The advent of modern technologies at the beginning of the last century has brought the development of various technologies, which has substantially increased computer use in educational organizations. This system is used to monitor and track an organization's resources, such as the computer hardware and software resources used in the organization's private network. The system monitors all the resources available on the intranet; if one of the resources becomes outdated it can be tracked and then upgraded, and if any resource is changed that is also tracked. Today, India ranks tenth worldwide in computer users in educational institutions and other organizations. This application helps the system administrator monitor all the information about computer software and hardware resources immediately.

  14. E-mail security: mail clients must use encrypted protocols

    CERN Multimedia

    2006-01-01

    In the coming weeks, users of mail clients other than Outlook (e.g. Pine, Mozilla, Mac Mail, etc.) may receive an e-mail from Mail-service@cern.ch with instructions to update the security settings of their mail client. The aim of this campaign is to enforce the use of encrypted and authenticated mail protocols in order to prevent the propagation of viruses and protect passwords from theft. As a first step, from 6 June 2006 onwards, access to mail servers from outside CERN will require a securely configured mail client as described in the help page http://cern.ch/mmmservices/Help/?kbid=191040. On this page most users will also find tools that will update their mail client settings automatically. Note that Outlook clients and WebMail access are not affected. The Mail Team

  15. Rancang Bangun Layanan Cloud Computing Berbasis IaaS Menggunakan Virtualbox

    Directory of Open Access Journals (Sweden)

    Muhammad Faizal Afriansyah

    2015-01-01

    Full Text Available Today, technology grows very fast, and many technologies are created to support users in their activities. These technologies need servers to store both systems and user data; more users require more servers, so server rooms become full and need extra space, which makes building servers and server rooms costly. The purpose of this research is to create IaaS-based server virtualization, connected to a router, a switch, virtual clients and an administrator, using the VirtualBox application. This goal is reached by following an appropriate research methodology with five stages: system definition, requirements specification, system configuration, system testing, and system analysis. The first stage, system definition, describes the initial identification of the system, the system requirements and the network topology of the implemented system. The second stage, requirements specification, determines the hardware and software specifications: the hardware consists of a computer with 8 GB of RAM and an AMD Phenom II X6 processor, and the software consists of VirtualBox and the operating systems. The third stage, system configuration, sets up each server, router and switch to perform the function of each device. The final stages, system testing and system analysis, check that the system is ready to use and works as intended. The results show that the IaaS-based server virtualization can serve a web page to all clients through virtual switches and routers running on a single computer.

  16. GeoServer cookbook

    CERN Document Server

    Iacovella, Stefano

    2014-01-01

    This book is ideal for GIS experts, developers, and system administrators who have had a first glance at GeoServer and who are eager to explore all its features in order to configure professional map servers. Basic knowledge of GIS and GeoServer is required.

  17. ENHANCING THE IMPREGNABILITY OF LINUX SERVERS

    Directory of Open Access Journals (Sweden)

    Rama Koteswara Rao G

    2014-03-01

    Full Text Available The worldwide IT industry is experiencing a rapid shift towards Service Oriented Architecture (SOA). In response to this trend, IT firms are adopting business models such as cloud-based services, which rely on reliable and highly available server platforms. Linux servers are well ahead of other server platforms in terms of security, so network security becomes a major concern for every organization offering cloud-based services. The most common form of attack on network security is Denial of Service (DoS). This paper focuses on fortifying Linux server defence mechanisms, and in particular on mechanisms to detect and immunize Linux servers against DoS attacks, resulting in increased reliability and availability of the services offered by Linux server platforms.
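
    As a small illustration of one detection signal such a mechanism could use (a sketch under my own assumptions, not the paper's method), the number of half-open TCP connections on a Linux host can be read from /proc/net/tcp, where state 03 means SYN_RECV; a sustained spike in this count is a classic SYN-flood symptom:

```python
# Count half-open TCP connections (state 03 = SYN_RECV) on a Linux host.
# The alert threshold is illustrative, not taken from the paper.
def count_syn_recv(path="/proc/net/tcp"):
    count = 0
    with open(path) as f:
        next(f)                      # skip the header line
        for line in f:
            fields = line.split()
            if len(fields) > 3 and fields[3] == "03":
                count += 1
    return count

if count_syn_recv() > 200:           # assumed threshold
    print("possible SYN flood in progress")
```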

  18. Mastering Lync Server 2010

    CERN Document Server

    Winters, Nathan

    2012-01-01

    An in-depth guide on the leading Unified Communications platform Microsoft Lync Server 2010 maximizes communication capabilities in the workplace like no other Unified Communications (UC) solution. Written by experts who know Lync Server inside and out, this comprehensive guide shows you step by step how to administer the newest and most robust version of Lync Server. Along with clear and detailed instructions, learning is aided by exercise problems and real-world examples of established Lync Server environments. You'll gain the skills you need to effectively deploy Lync Server 2010 and be on

  19. On Markovian multi-class, multi-server queueing

    NARCIS (Netherlands)

    Harten, van A.; Sleptchenko, A.

    2003-01-01

    Multi-class multi-server queueing problems are a generalisation of the well-known M/M/k queue to arrival processes with clients of N types that require exponentially distributed service with different average service times. In this paper, we give a procedure to construct exact solutions of the stati
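
    For background (a textbook result, not taken from the paper), the stationary distribution of the classical M/M/k queue that this work generalizes can be written, with offered load a = lambda/mu and utilization rho = a/k < 1, as:

```latex
% Classical M/M/k stationary probabilities (standard background, for context only)
p_n = \begin{cases}
  p_0 \dfrac{a^n}{n!},            & 0 \le n \le k,\\[1ex]
  p_0 \dfrac{a^n}{k!\,k^{\,n-k}}, & n > k,
\end{cases}
\qquad
p_0 = \left[\sum_{n=0}^{k-1} \frac{a^n}{n!} + \frac{a^k}{k!\,(1-\rho)}\right]^{-1}
```

    The multi-class case replaces the single service rate mu by class-dependent rates, which is what makes exact solutions non-trivial.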

  20. Green Cloud Computing Platform on Micro-server Cluster Architecture%微服务器集群架构的绿色云计算平台

    Institute of Scientific and Technical Information of China (English)

    伍康文; 柴华

    2013-01-01

    With the development and popularization of computer and network technology, the Internet Data Center (IDC), as a facility in which large numbers of computers and network devices are installed and operated centrally, has become an independent industry with a complete business model. Because an IDC runs thousands of computers in a sealed environment, the power consumed by IT equipment and cooling air conditioning accounts for more than 50% of its total operating cost, and this huge power consumption limits the scale of IDC construction. A green, energy-saving machine room has therefore become an inevitable choice for today's IDCs. At present, most power-reduction measures target external facilities rather than the servers, which are a major part of the power consumption. This paper proposes a complete IDC solution based on reducing server power consumption, using solar energy as the main power supply and optimizing the distribution system. It then discusses the underlying principles and proposes "Green Cloud" technical standards and "Green Cloud" room construction specifications. Finally, the green performance is validated through test results and commissioning experience.%  随着计算机和网络技术的发展与普及,数据中心作为计算机和网络集中安装运行的单位,已经成为一个独立的行业并具有完整的业务模式。由于数据中心是将成千上万的计算机集中在密封的环境中运行, IT 设备以及降温空调的耗电成本占数据中心运营成本的50%以上。巨大的耗电量也限制了数据中心的建设规模。因此绿色节能机房成为了当前数据中心的必然选择。目前,降低能耗的方法大部份停留在外围设施上,没有触及核心耗电的服务器部分。从降低服务器能耗出发,以太阳能为主供电源,再到优化配电系统,提出完整的数据中心建设解决方案,论述其原理并提出了“绿云技术”的技术标准和“绿云房”建设规范。最后通过测试结果

  1. Preconsult interactive computer-assisted client assessment survey for common mental disorders in a community health centre: a randomized controlled trial

    Science.gov (United States)

    Ahmad, Farah; Lou, Wendy; Shakya, Yogendra; Ginsburg, Liane; Ng, Peggy T.; Rashid, Meb; Dinca-Panaitescu, Serban; Ledwos, Cliff; McKenzie, Kwame

    2017-01-01

    Background: Access disparities for mental health care exist for vulnerable ethnocultural and immigrant groups. Community health centres that serve these groups could be supported further by interactive, computer-based, self-assessments. Methods: An interactive computer-assisted client assessment survey (iCCAS) tool was developed for preconsult assessment of common mental disorders (using the Patient Health Questionnaire [PHQ-9], Generalized Anxiety Disorder 7-item [GAD-7] scale, Primary Care Post-traumatic Stress Disorder [PTSD-PC] screen and CAGE [concern/cut-down, anger, guilt and eye-opener] questionnaire), with point-of-care reports. The pilot randomized controlled trial recruited adult patients, fluent in English or Spanish, who were seeing a physician or nurse practitioner at the partnering community health centre in Toronto. Randomization into iCCAS or usual care was computer generated, and allocation was concealed in sequentially numbered, opaque envelopes that were opened after consent. The objectives were to examine the interventions' efficacy in improving mental health discussion (primary) and symptom detection (secondary). Data were collected by exit survey and chart review. Results: Of the 1248 patients assessed, 190 were eligible for participation. Of these, 148 were randomly assigned (response rate 78%). The iCCAS (n = 75) and usual care (n = 72) groups were similar in sociodemographics; 98% were immigrants, and 68% were women. Mental health discussion occurred for 58.7% of patients in the iCCAS group and 40.3% in the usual care group (p ≤ 0.05). The effect remained significant while controlling for potential covariates (language, sex, education, employment) in generalized linear mixed model (GLMM; adjusted odds ratio [OR] 2.2; 95% confidence interval [CI] 1.1-4.5). Mental health symptom detection occurred for 38.7% of patients in the iCCAS group and 27.8% in the usual care group (p > 0.05). The effect was not significant beyond potential

  2. Blind Quantum Computation

    CERN Document Server

    Arrighi, P; Arrighi, Pablo; Salvail, Louis

    2003-01-01

    We investigate the possibility of having someone carry out the work of executing a function for you, but without letting him learn anything about your input. Say Alice wants Bob to compute some well-known function f upon her input x, but wants to prevent Bob from learning anything about x. The situation arises for instance if client Alice has limited computational resources in comparison with mistrusted server Bob, or if x is an inherently mobile piece of data. Could there be a protocol whereby Bob is forced to compute f(x) "blindly", i.e. without observing x? We provide such a blind computation protocol for the class of functions which admit an efficient procedure to generate random input-output pairs, e.g. factorization. The setting is quantum, the security is unconditional, the eavesdropper is as malicious as can be. Keywords: Secure Circuit Evaluation, Secure Two-party Computation, Information Hiding, Information gain vs disturbance.

  3. Secure thin client architecture for DICOM image analysis

    Science.gov (United States)

    Mogatala, Harsha V. R.; Gallet, Jacqueline

    2005-04-01

    This paper presents a concept of Secure Thin Client (STC) Architecture for Digital Imaging and Communications in Medicine (DICOM) image analysis over the Internet. STC Architecture provides in-depth analysis and design of customized reports for DICOM images using drag-and-drop and data warehouse technology. Using a personal computer and a common set of browsing software, STC can be used for analyzing and reporting detailed patient information, type of examinations, date, Computed Tomography (CT) dose index, and other relevant information stored within the image header files as well as in the hospital databases. STC Architecture is a three-tier architecture. The first tier consists of a drag-and-drop web-based interface and web server, which provides customized analysis and reporting ability to the users. The second tier consists of an online analytical processing (OLAP) server and database system, which serves fast, real-time, aggregated multi-dimensional data using OLAP technology. The third tier consists of a smart-algorithm-based software program which extracts DICOM tags from CT images in this particular application, irrespective of the CT vendor, and transfers these tags into a secure database system. This architecture provides the Winnipeg Regional Health Authority (WRHA) with quality indicators for CT examinations in the hospitals. It also provides health care professionals with an analytical tool to optimize radiation dose and image quality parameters. The information is provided to the user by way of a secure socket layer (SSL) and role-based security criteria over the Internet. Although this particular application has been developed for WRHA, this paper also discusses the effort to extend the Architecture to other hospitals in the region. Any DICOM tag from any imaging modality could be tracked with this software.
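
    A minimal sketch of the third-tier idea (not the WRHA implementation; it assumes the pydicom package, a local file name and a SQLite table chosen here for illustration) extracts a few vendor-independent header attributes and stores them in a database:

```python
# Sketch only: extract standard DICOM header attributes and store them.
# File name, database name and table layout are assumptions.
import sqlite3
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")          # hypothetical CT image file

record = (
    str(ds.get("PatientID", "")),
    str(ds.get("StudyDate", "")),
    str(ds.get("Modality", "")),
    str(ds.get("Manufacturer", "")),           # works irrespective of CT vendor
)

conn = sqlite3.connect("dicom_tags.db")
conn.execute("CREATE TABLE IF NOT EXISTS tags "
             "(patient_id TEXT, study_date TEXT, modality TEXT, manufacturer TEXT)")
conn.execute("INSERT INTO tags VALUES (?, ?, ?, ?)", record)
conn.commit()
conn.close()
```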

  4. JavaTech, an Introduction to Scientific and Technical Computing with Java

    Science.gov (United States)

    Lindsey, Clark S.; Tolliver, Johnny S.; Lindblad, Thomas

    2010-06-01

    Preface; Acknowledgements; Part I. Introduction to Java: 1. Introduction; 2. Language basics; 3. Classes and objects in Java; 4. More about objects in Java; 5. Organizing Java files and other practicalities; 6. Java graphics; 7. Graphical user interfaces; 8. Threads; 9. Java input/output; 10. Java utilities; 11. Image handling and processing; 12. More techniques and tips; Part II. Java and the Network: 13. Java networking basics; 14. A Java web server; 15. Client/server with sockets; 16. Distributed computing; 17. Distributed computing - the client; 18. Java remote method invocation (RMI); 19. CORBA; 20. Distributed computing - putting it all together; 21. Introduction to web services and XML; Part III. Out of the Sandbox: 22. The Java native interface (JNI); 23. Accessing the platform; 24. Embedded Java; Appendices; Index.

  5. Running Servers around Zero Degrees

    OpenAIRE

    PervilÀ, Mikko; Kangasharju, Jussi

    2010-01-01

    Data centers are a major consumer of electricity and a significant fraction of their energy use is devoted to cooling the data center. Recent prototype deployments have investigated the possibility of using outside air for cooling and have shown large potential savings in energy consumption. In this paper, we push this idea to the extreme, by running servers outside in Finnish winter. Our results show that commercial, off-the-shelf computer equipment can tolerate extreme conditions such as ou...

  6. Reducing client waiting time.

    Science.gov (United States)

    1992-01-01

    This first issue of Family Planning (FP) Manager focuses on how to analyze client waiting time and reduce long waits easily and inexpensively. Client flow analysis can be used by managers and staff to identify organizational factors affecting waiting time. Symptoms of long waiting times are overcrowded waiting rooms, clients not returning for services, staff complaints about rushing and waiting, and hurried counseling sessions. Client satisfaction is very important in order to retain FP users. Simple procedures such as routing return visits differently can make a difference in program effectiveness. Assessment of the number of first visits, the number of revisits, and the types of methods and services that the clinic provides is a first step. Client flow analysis involves assigning a number to each client on registration, attaching the client flow form to the medical chart, entering the FP method and type of visit, asking staff to note the time at each station, and summarizing data in a master chart. The staff should be involved in plotting data for each client to show waiting versus staff contact time through the use of color coding for each type of staff contact. Bottlenecks become very visible when charted. The amount of time spent at each station can be measured, and gaps in clients' contact with staff can be identified. An accurate measure of total waiting time can be obtained. A quick assessment can be made by recording arrival and departure times for each client in one morning or afternoon of a peak day. The procedure is to count the number of clients waiting at 15-minute intervals. The process should be repeated every 3-6 months to observe changes. If waiting times appear long, a more thorough assessment is needed on both a peak and a typical day. An example is given of a completed chart and graph of results with sample data. Managers need to set goals for client flow, streamline client routes, and utilize waiting time wisely by providing educational talks

  7. Effective Computing Server Application Based on OPC%基于OPC框架的高效计算服务应用

    Institute of Scientific and Technical Information of China (English)

    张琦; 张春平; 杨志; 刘铭

    2016-01-01

    大数据计算是当前云计算研究的热点之一。在电力信息化、精益化的建设过程中,业务复杂度不断提高,数据量与日俱增,这使得传统的数据加工性能日益劣化。在复杂的业务场景下,由于海量的电力数据,使得数据指标加工计算的效率非常低下,传统方式的加工任务经常耗时数个小时,难以满足用户的体验要求。为了解决这个问题,全面提升数据指标加工任务效率,基于对象化并行计算(Objectification Parallel Computing, OPC)框架实现了一种高效计算服务, OPC是分布式并行内存计算框架。在OPC框架中,大数据集被拆分成小数据集,并分布式地存储在集群内存中。 OPC计算任务借鉴了分而治之和归并树的思想,将计算任务分成两个阶段:本地计算任务和计算结果收集汇总。计算任务基于本地计算数据进行计算,得到本地计算结果,然后将计算结果通过收集结点进行汇总合并,最后得到最终结果。 OPC框架技术应用在国家电网公司工程生产管理系统(PMS)中,应用效果表明该技术稳定、可靠,性能提升几十至数百倍,可满足高效计算需求。%Big data computing is one of the current research focuses in cloud computing. As electric power informatization advances, business complexity keeps increasing and data volumes grow day by day, so the performance of traditional data processing deteriorates. In complex business scenarios the huge volume of power data makes the computation of data indicators very inefficient: processing tasks done in the traditional way often take several hours, which cannot meet users' expectations. To solve this problem and comprehensively improve the efficiency of indicator-processing tasks, this paper implements an effective computing service based on the Objectification Parallel Computing (OPC) framework, a distributed parallel in-memory computing framework. In the OPC framework, the big data set is split into small data sets that are stored, in a distributed fashion, in the memory of the OPC cluster. Borrowing the ideas of divide-and-conquer and merge trees, a computing task runs in two stages: in the first stage each node computes on its local data and obtains an intermediate result; in the second stage the intermediate results are collected and merged, step by step, into the final result. The OPC framework has been applied in the State Grid Corporation of China engineering production management system (PMS); the results show that the technique is stable and reliable, improves performance by tens to hundreds of times, and meets the demand for efficient computation.
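
    The two-stage pattern described above (local computation followed by a merge of intermediate results) can be sketched with nothing more than Python's multiprocessing module; this mirrors the divide-and-merge idea only and is not the OPC framework's API:

```python
# Stage 1: each worker aggregates its own slice of the data locally.
# Stage 2: the partial results are collected and merged into the final answer.
from multiprocessing import Pool

def local_stage(chunk):
    return sum(chunk)                       # any local aggregate would do

def merge_stage(partials):
    return sum(partials)                    # collect-and-merge step

if __name__ == "__main__":
    data = list(range(1_000_000))           # stand-in for the "big" data set
    chunks = [data[i::4] for i in range(4)] # split into small data sets
    with Pool(4) as pool:
        partials = pool.map(local_stage, chunks)
    print(merge_stage(partials))
```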

  8. TRAP: A Three-Way Handshake Server for TCP Connection Establishment

    Directory of Open Access Journals (Sweden)

    Fu-Hau Hsu

    2016-11-01

    Full Text Available Distributed denial of service (DDoS) attacks have become more and more frequent. In 2013, a massive DDoS attack was launched against Spamhaus, causing the service to shut down. In this paper, we present a three-way handshake server for Transmission Control Protocol (TCP) connection redirection that utilizes TCP header options. When a legitimate client attempts to connect to a server undergoing a SYN-flood DDoS attack, it initiates a three-way handshake as usual. Once the connection has been established, the server replies with a reset (RST) packet in which a new server address and a secret are embedded. The client can then connect to the new server, which only accepts SYN packets carrying the correct secret, using the supplied secret.
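
    To make the redirection idea concrete, here is an application-level analogue in Python (the paper embeds the new address and the secret in TCP header options of the RST packet, which requires raw packet crafting not shown here; the hosts, ports and secret below are made up):

```python
# Application-level sketch of "handshake, receive new address + secret, reconnect".
import socket
import threading
import time

SECRET = "s3cr3t"                     # hypothetical shared secret
REAL_SERVER = ("127.0.0.1", 9100)

def redirector():
    srv = socket.create_server(("127.0.0.1", 9000))
    conn, _ = srv.accept()            # legitimate client completes a handshake
    conn.sendall(f"{REAL_SERVER[0]}:{REAL_SERVER[1]}:{SECRET}".encode())
    conn.close()

def protected_server():
    srv = socket.create_server(REAL_SERVER)
    conn, _ = srv.accept()
    if conn.recv(64).decode() == SECRET:   # only serve clients presenting the secret
        conn.sendall(b"hello, legitimate client")
    conn.close()

threading.Thread(target=redirector, daemon=True).start()
threading.Thread(target=protected_server, daemon=True).start()
time.sleep(0.2)                       # give the listeners time to start

with socket.create_connection(("127.0.0.1", 9000)) as s:
    host, port, secret = s.recv(64).decode().split(":")
with socket.create_connection((host, int(port))) as s:
    s.sendall(secret.encode())
    print(s.recv(64).decode())
```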

  9. Low-bandwidth and non-compute intensive remote identification of microbes from raw sequencing reads.

    Directory of Open Access Journals (Sweden)

    Laurent Gautier

    Full Text Available Cheap DNA sequencing may soon become routine not only for human genomes but also for practically anything requiring the identification of living organisms from their DNA: tracking of infectious agents, control of food products, bioreactors, or environmental samples. We propose a novel general approach to the analysis of sequencing data where a reference genome does not have to be specified. Using a distributed architecture we are able to query a remote server for hints about what the reference might be, transferring a relatively small amount of data. Our system consists of a server with known reference DNA indexed, and a client with raw sequencing reads. The client sends a sample of unidentified reads, and in return receives a list of matching references. Sequences for the references can be retrieved and used for exhaustive computation on the reads, such as alignment. To demonstrate this approach we have implemented a web server, indexing tens of thousands of publicly available genomes and genomic regions from various organisms and returning lists of matching hits from query sequencing reads. We have also implemented two clients: one running in a web browser, and one as a python script. Both are able to handle a large number of sequencing reads, even from portable devices (the browser-based client running on a tablet), perform their task within seconds, and consume an amount of bandwidth compatible with mobile broadband networks. Such client-server approaches could develop in the future, allowing a fully automated processing of sequencing data and routine instant quality check of sequencing runs from desktop sequencers. A web access is available at http://tapir.cbs.dtu.dk. The source code for a python command-line client, a server, and supplementary data are available at http://bit.ly/1aURxkc.
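
    A client along these lines could be sketched as follows (a hypothetical sketch: the endpoint URL and payload format are my assumptions, not the actual tapir.cbs.dtu.dk API); it samples reads from a FASTQ file and asks the server for candidate references:

```python
# Hypothetical client sketch: send a small sample of reads, get candidate references.
import random
import requests

def sample_reads(fastq_path, n=100):
    with open(fastq_path) as f:
        lines = f.read().splitlines()
    reads = [lines[i] for i in range(1, len(lines), 4)]   # sequence lines in FASTQ
    return random.sample(reads, min(n, len(reads)))

payload = {"reads": sample_reads("sample.fastq")}          # assumed payload format
resp = requests.post("https://example.org/identify", json=payload, timeout=30)
print(resp.json())   # expected: a ranked list of matching reference genomes
```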

  10. FireDetective: Understanding Ajax Client/Server Interactions

    NARCIS (Netherlands)

    Matthijssen, N.; Zaidman, A.

    2011-01-01

    Ajax-enabled web applications are a new breed of highly interactive, highly dynamic web applications. Although Ajax allows developers to create rich web applications, Ajax applications can be difficult to comprehend and thus to maintain. FireDetective aims to facilitate the understanding of Ajax app

  11. Location Privacy Techniques in Client-Server Architectures

    Science.gov (United States)

    Jensen, Christian S.; Lu, Hua; Yiu, Man Lung

    A typical location-based service returns nearby points of interest in response to a user location. As such services are becoming increasingly available and popular, location privacy emerges as an important issue. In a system that does not offer location privacy, users must disclose their exact locations in order to receive the desired services. We view location privacy as an enabling technology that may lead to increased use of location-based services.

  12. Connecting traces: understanding client-server interactions in Ajax applications

    NARCIS (Netherlands)

    Matthijssen, N.; Zaidman, A.; Storey, M.; Bull, I.; Van Deursen, A.

    2010-01-01

    Ajax-enabled web applications are a new breed of highly interactive, highly dynamic web applications. Although Ajax allows developers to create rich web applications, Ajax applications can be difficult to comprehend and thus to maintain. For this reason, we have created FireDetective, a tool that us

  13. Server Technology – Web Based Service Oriented Architecture for Mobile Augmented Reality System

    Directory of Open Access Journals (Sweden)

    Jatin Dilipkumar Shah

    2012-11-01

    Full Text Available Server technology brings to mind many platforms, such as Microsoft, Sun Java, IBM, open source and many more. In mobile augmented reality, the server plays a very important role in augmenting the data: its responsibility is to collect the data, mix virtual data with real data, and send the result back to the client on a remote device at a remote location. In this paper we briefly discuss server technology for a web-based service-oriented architecture, the processing software required for augmentation and its software technology, how the server accepts input from various types of devices, and how it generates output data of various types, such as audio, video and 3-D graphics.

  14. Improvements to the National Transport Code Collaboration Data Server

    Science.gov (United States)

    Alexander, David A.

    2001-10-01

    The data server of the National Transport Code Collaboration Project provides a universal network interface to interpolated or raw transport data accessible by a universal set of names. Data can be acquired from a local copy of the International Multi-Tokamak (ITER) profile database as well as from TRANSP trees of MDS Plus data systems on the net. Data is provided to the user's network client via a CORBA interface, thus providing stateful data server instances, which have the advantage of remembering the desired interpolation, data set, etc. This paper will review the status and discuss the recent improvements made to the data server, such as the modularization of the data server and the addition of hdf5 and MDS Plus data file writing capability.

  15. Development of Client Environments for a Synchronization System based on Events; Desarrollo de Entornos Cliente para un Sistema de Sincronizacion Basado en Eventos

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, A.; Vega, J.

    2005-07-01

    The Asynchronous Event Distribution System (AEDS) was built to provide synchronization resources within the TJ-II local area network. It is a software system developed to add soft synchronization capabilities to the TJ-II data acquisition, control and analysis environments. Soft synchronization signifies that AEDS is not a real-time system; in fact, AEDS is based on TCP/IP over Ethernet networks. However, its response time is adequate for practical purposes when the synchronization requirements can tolerate some delay between event dispatch and message reception. Event broadcasters (or synchronization servers in AEDS terminology) are Windows computers. Destination computers (or synchronization clients) were also Windows machines in the first version of AEDS. However, this fact imposed a very important limitation on the synchronization capabilities. To overcome this situation, synchronization clients for different environments have been added to AEDS: time-sharing operating systems (UNIX and LINUX), real-time operating systems (OS9 and VxWorks) and Java applications. The synchronization primitives available on these systems differ considerably, so several approaches were chosen in order to provide the same functionality in the various environments: the POSIX thread library with its basic synchronization primitives (mutexes and condition variables) on UNIX/LINUX systems, IPC mechanisms for concurrent processes on the OS9 and VxWorks real-time operating systems, and synchronized wait/notify primitives on Java virtual machines. (Author) 11 refs.
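
    The wait/notify pattern used by the Java clients can be illustrated with a short sketch (written here in Python purely for illustration; AEDS itself is not a Python system, and the event name is made up):

```python
# Dispatcher/clients sketch of the wait-notify synchronization primitive.
import threading

events = []
cond = threading.Condition()

def client(name):
    with cond:
        while not events:
            cond.wait()               # block until the dispatcher signals an event
        print(name, "received", events[-1])

def dispatcher(event):
    with cond:
        events.append(event)
        cond.notify_all()             # wake every waiting synchronization client

threads = [threading.Thread(target=client, args=(f"client-{i}",)) for i in range(3)]
for t in threads:
    t.start()
dispatcher("TJII_SHOT_START")         # hypothetical event name
for t in threads:
    t.join()
```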

  16. CERN servers go to Mexico

    CERN Multimedia

    Stefania Pandolfi

    2015-01-01

    On Wednesday, 26 August, 384 servers from the CERN Computing Centre were donated to the Faculty of Science in Physics and Mathematics (FCFM) and the Mesoamerican Centre for Theoretical Physics (MCTP) at the University of Chiapas, Mexico.   CERN’s Director-General, Rolf Heuer, met the Mexican representatives in an official ceremony in Building 133, where the servers were prepared for shipment. From left to right: Frédéric Hemmer, CERN IT Department Head; Raúl Heredia Acosta, Deputy Permanent Representative of Mexico to the United Nations and International Organizations in Geneva; Jorge Castro-Valle Kuehne, Ambassador of Mexico to the Swiss Confederation and the Principality of Liechtenstein; Rolf Heuer, CERN Director-General; Luis Roberto Flores Castillo, President of the Swiss Chapter of the Global Network of Qualified Mexicans Abroad; Virginia Romero Tellez, Coordinator of Institutional Relations of the Swiss Chapter of the Global Network of Qualified Me...

  17. Interactive client side data visualization with d3.js

    Science.gov (United States)

    Rodzianko, A.; Versteeg, R.; Johnson, D. V.; Soltanian, M. R.; Versteeg, O. J.; Girouard, M.

    2015-12-01

    Geoscience data associated with near surface research and operational sites is increasingly voluminous and heterogeneous (both in terms of providers and data types - e.g. geochemical, hydrological, geophysical, modeling data, of varying spatiotemporal characteristics). Such data allows scientists to investigate fundamental hydrological and geochemical processes relevant to agriculture, water resources and climate change. For scientists to easily share, model and interpret such data requires novel tools with capabilities for interactive data visualization. Under sponsorship of the US Department of Energy, Subsurface Insights is developing the Predictive Assimilative Framework (PAF): a cloud based subsurface monitoring platform which can manage, process and visualize large heterogeneous datasets. Over the last year we transitioned our visualization method from a server side approach (in which images and animations were generated using Jfreechart and Visit) to a client side one that utilizes the D3 Javascript library. Datasets are retrieved using web service calls to the server, returned as JSON objects and visualized within the browser. Users can interactively explore primary and secondary datasets from various field locations. Our current capabilities include interactive data contouring and heterogeneous time series data visualization. While this approach is very powerful and not necessarily unique, special attention needs to be paid to latency and responsiveness issues as well as to issues such as cross-browser code compatibility so that users have an identical, fluid and frustration-free experience across different computational platforms. We gratefully acknowledge support from the US Department of Energy under SBIR Award DOE DE-SC0009732, the use of data from the Lawrence Berkeley National Laboratory (LBNL) Sustainable Systems SFA Rifle field site and collaboration with LBNL SFA scientists.

  18. Mitigating App-DDoS Attacks on Web Servers

    Directory of Open Access Journals (Sweden)

    Manisha M. Patil

    2011-07-01

    Full Text Available In this paper, a lightweight mechanism is proposed to mitigate session-flooding and request-flooding app-DDoS attacks on web servers. An app-DDoS attack is an application-layer Distributed Denial of Service attack, which prevents legitimate users from accessing services. A number of mechanisms are available that can be installed on routers and firewalls to mitigate network-layer DDoS attacks such as SYN-flood and ping-of-death attacks, but network-layer solutions do not apply here because app-DDoS attacks are indistinguishable at the level of packets and protocols. The proposed mechanism uses trust to differentiate legitimate users from attackers: trust in a client is evaluated from its visiting history, and requests are scheduled in decreasing order of trust. Trust information is stored on the client side in the form of cookies. The mitigation mechanism can be implemented as a Java package that runs separately and forwards valid requests to the server. The mechanism also mitigates request-flooding attacks by using the Client Puzzle Protocol: when the server is under a request-flooding attack, source throttling is applied by imposing a cost on the client, collected in terms of CPU cycles.
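
    The scheduling rule described above (serve requests in decreasing order of client trust) can be sketched with a simple priority queue; the trust scores below are illustrative, not the paper's values:

```python
# Requests are popped in decreasing order of the client's trust score.
import heapq

queue = []

def enqueue(trust, request):
    heapq.heappush(queue, (-trust, request))   # negate so higher trust pops first

def next_request():
    return heapq.heappop(queue)[1]

enqueue(0.9, "GET /index.html from returning visitor")
enqueue(0.1, "GET /index.html from unknown client")
enqueue(0.5, "GET /search from occasional visitor")
print(next_request())    # the most trusted client's request is served first
```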

  19. The UK Human Genome Mapping Project online computing service.

    Science.gov (United States)

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.

  20. EPICS Channel Access Server for LabVIEW

    Energy Technology Data Exchange (ETDEWEB)

    2016-10-01

    It can be challenging to interface National Instruments LabVIEW (http://www.ni.com/labview/) with EPICS (http://www.aps.anl.gov/epics/). Such an interface is required when an instrument control program developed in LabVIEW also has to be part of a global control system, a situation that frequently arises in large accelerator facilities. The Channel Access Server is written in LabVIEW, so it works on any hardware/software platform where LabVIEW is available. It provides full server functionality, so any EPICS client can communicate with it.
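
    From the client side, talking to such a server is ordinary Channel Access; a minimal check might look like this (assuming the pyepics package, and with "LV:temperature" and "LV:setpoint" as made-up PV names exported by the LabVIEW server):

```python
# Minimal EPICS Channel Access client check (PV names are hypothetical).
from epics import caget, caput

value = caget("LV:temperature")     # read a PV served by the LabVIEW CA server
print("current value:", value)
caput("LV:setpoint", 25.0)          # write back to another PV
```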

  1. Microsoft® Exchange Server 2007 Administrator's Companion

    CERN Document Server

    Glenn, Walter; Maher, Joshua

    2009-01-01

    Get your mission-critical messaging and collaboration systems up and running with the essential guide to deploying and managing Exchange Server 2007, now updated for SP1. This comprehensive administrator's reference covers the full range of server and client deployments, unified communications, security features, performance optimization, troubleshooting, and disaster recovery. It also includes four chapters on security policy, tools, and techniques to help protect messaging systems from viruses, spam, and phishing. Written by expert authors Walter Glenn and Scott Lowe, this reference deliver

  2. A cancellable and fuzzy fingerprint scheme for mobile computing security

    Science.gov (United States)

    Yang, Wencheng; Xi, Kai; Li, Cai

    2012-09-01

    Fingerprint recognition provides an effective user authentication solution for mobile computing systems. However, as a fingerprint template protection scheme, fingerprint fuzzy vault is subject to cross-matching attacks, since the same finger might be registered for various applications. In this paper, we propose a fingerprint-based biometric security scheme named the cancellable and fuzzy fingerprint scheme, which combines a cancellable non-linear transformation with the client/server version of fuzzy vault, to address the cross-matching attack in a mobile computing system. Experimental results demonstrate that our scheme can provide reliable and secure protection to the mobile computing system while achieving an acceptable matching performance.

  3. Design and Implementation VOIP Service on Open IMS and Asterisk Servers Interconnected through Enum Server

    CERN Document Server

    Munadi, Rendy; Mulyana, Asep; M, R Rumani; 10.5121/ijngn.2010.2201

    2010-01-01

    Asterisk and Open IMS both use the SIP signalling protocol, which makes it possible to connect them. To facilitate this interconnection, an Enum server, which translates numbering addresses such as PSTN numbers (E.164) into URI addresses (Uniform Resource Identifiers), can be used. In this research, we interconnect Open IMS and Asterisk servers through an Enum server. We then analyze the server performance and the PDD (Post Dial Delay) values produced by the system. The experiment shows that, for a call from an Open IMS user to an analog Asterisk telephone (FXS) with an arrival rate at each server of 30 calls/s, the maximum PDD value is 493.656 ms. Open IMS can serve a maximum of 30 calls/s on a 1.55 GHz processor, while Asterisk, on a 3.0 GHz processor, can serve up to 55 calls/s. The Enum server, on a 1.15 GHz processor, can serve a maximum of 8156 queries/s.
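
    The E.164-to-URI translation performed by the Enum server is a DNS NAPTR lookup on a reversed-digit domain under e164.arpa; a minimal sketch (assuming the dnspython package and a made-up number) looks like this:

```python
# ENUM lookup sketch: +420123456789 -> 9.8.7.6.5.4.3.2.1.0.2.4.e164.arpa -> NAPTR -> SIP URI.
# The number is made up; the query only succeeds for numbers present in the ENUM tree.
import dns.resolver

def enum_domain(e164_number):
    digits = e164_number.lstrip("+")
    return ".".join(reversed(digits)) + ".e164.arpa"

answers = dns.resolver.resolve(enum_domain("+420123456789"), "NAPTR")
for record in answers:
    print(record.regexp)   # typically something like "!^.*$!sip:user@example.org!"
```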

  4. Improvement of AMGA Python Client Library for Belle II Experiment

    Science.gov (United States)

    Kwak, Jae-Hyuck; Park, Geunchul; Huh, Taesang; Hwang, Soonwook

    2015-12-01

    This paper describes the recent improvement of the AMGA (ARDA Metadata Grid Application) python client library for the Belle II Experiment. We were drawn to the action items related to library improvement after in-depth discussions with the developer of the Belle II distributed computing system. The improvement includes client-side metadata federation support in python, DIRAC SSL library support as well as API refinement for synchronous operation. Some of the improvements have already been applied to the AMGA python client library as bundled with the Belle II distributed computing software. The recent mass Monte Carlo (MC) production campaign shows that the AMGA python client library is reliably stable.

  5. CERN servers donated to Ghana

    CERN Multimedia

    CERN Bulletin

    2012-01-01

    Cutting-edge research requires a constantly high performance of the computing equipment. At the CERN Computing Centre, computers typically need to be replaced after about four years of use. However, while servers may be withdrawn from cutting-edge use, they are still good for other uses elsewhere. This week, 220 servers and 30 routers were donated to the Kwame Nkrumah University of Science and Technology (KNUST) in Ghana.   “KNUST will provide a good home for these computers. The university has also developed a plan for using them to develop scientific collaboration with CERN,” said John Ellis, a professor at King’s College London and a visiting professor in CERN’s Theory Group.  John Ellis was heavily involved in building the relationship with Ghana, which started in 2006 when a Ghanaian participated in the CERN openlab student programme. Since 2007 CERN has hosted Ghanaians especially from KNUST in the framework of the CERN Summer Student Progr...

  6. Campaign Consultants - Client Payments

    Data.gov (United States)

    City of San Francisco — Campaign Consultants are required to report “economic consideration” promised by or received from clients in exchange for campaign consulting services during the...

  7. Linux Server Security

    CERN Document Server

    Bauer, Michael D

    2005-01-01

    Linux consistently appears high up in the list of popular Internet servers, whether it's for the Web, anonymous FTP, or general services such as DNS and delivering mail. But security is the foremost concern of anyone providing such a service. Any server experiences casual probe attempts dozens of times a day, and serious break-in attempts with some frequency as well. This highly regarded book, originally titled Building Secure Servers with Linux, combines practical advice with a firm knowledge of the technical tools needed to ensure security. The book focuses on the most common use of Linux--

  8. Learning Zimbra Server essentials

    CERN Document Server

    Kouka, Abdelmonam

    2013-01-01

    A standard tutorial approach which will guide readers through all of the intricacies of the Zimbra Server. If you are any kind of Zimbra user, this book will be useful for you, from newbies to experts who would like to learn how to set up a Zimbra server. If you are an IT administrator or consultant who is exploring the idea of adopting, or has already adopted, Zimbra as your mail server, then this book is for you. No prior knowledge of Zimbra is required.

  9. 在移动计算环境中基于移动代理的缓存失效方案%A Cache Invalidation Scheme Based on Mobile Agent in Mobile Computing Environments

    Institute of Scientific and Technical Information of China (English)

    吴劲; 卢显良; 任立勇

    2003-01-01

    Caching can reduce the bandwidth requirement in a mobile computing environment as well as minimize the energy consumption of mobile hosts. To confirm the validity of a mobile host's cache content, servers periodically broadcast cache invalidation reports that contain information about the data that has been updated. However, as mobile hosts may operate in sleeping (disconnected) mode, some reports may be missed and the clients are then forced to discard the entire cache content. In this paper, we present a cache invalidation scheme based on mobile agents for mobile computing environments, which manages consistency between mobile hosts and servers and avoids the loss of cache invalidation reports.
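
    The effect of an invalidation report on the mobile host's cache can be sketched in a few lines (an illustration of the general idea only, not the paper's mobile-agent implementation; keys and values are made up):

```python
# Apply a broadcast invalidation report: drop only the entries that changed,
# instead of discarding the whole cache after a missed report.
cache = {"item1": "v1", "item2": "v7", "item3": "v2"}

def apply_invalidation_report(report):
    for data_id in report["updated_ids"]:
        cache.pop(data_id, None)

apply_invalidation_report({"timestamp": 1700000000, "updated_ids": ["item2"]})
print(cache)    # item2 has been invalidated; item1 and item3 remain usable
```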

  10. A Proposed Algorithm to improve security & Efficiency of SSL-TLS servers using Batch RSA decryption

    CERN Document Server

    Pateriya, R K; Shrivastava, S C; Patel, Jaideep

    2009-01-01

    Today, the Internet has become an essential part of our lives, and over 90 percent of e-commerce is carried out on it. Security algorithms are therefore necessary to guarantee producer-client transactions and the safety of financial applications. The applicability of the RSA algorithm derives from properties such as confidentiality, safe authentication, and data safety and integrity on the Internet, which allow such networks to be used conveniently over short, medium and even long distances and from different public places. RSA encryption on the client side is relatively cheap, whereas the corresponding decryption on the server side is expensive because the private exponent is much larger. SSL/TLS servers therefore become swamped with public-key decryption operations when the number of simultaneous requests increases quickly. The batch RSA method is useful for such highly loaded web servers. In our proposed algorithm, by reducing the response time and the clients' tolerable waiting time, an improvement...
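
    The cost asymmetry that motivates batch RSA can be seen even with the classic textbook toy key below (tiny, well-known parameters used only for illustration; real TLS keys are 2048 bits or more, and batch RSA amortizes the expensive private-exponent operation across several requests):

```python
# Textbook RSA with toy parameters: encryption uses a small public exponent e,
# decryption uses a much larger private exponent d, hence the server-side cost.
p, q = 61, 53
n = p * q                      # 3233
e = 17                         # small public exponent (client-side encryption)
d = 2753                       # large private exponent (server-side decryption)

m = 65                         # toy plaintext
c = pow(m, e, n)               # cheap: few modular multiplications
assert pow(c, d, n) == m       # expensive: many more modular multiplications
print("ciphertext:", c)
```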

  11. Continuous-variable quantum computing on encrypted data

    Science.gov (United States)

    Marshall, Kevin; Jacobsen, Christian S.; Schäfermeier, Clemens; Gehring, Tobias; Weedbrook, Christian; Andersen, Ulrik L.

    2016-12-01

    The ability to perform computations on encrypted data is a powerful tool for protecting a client's privacy, especially in today's era of cloud and distributed computing. In terms of privacy, the best solutions that classical techniques can achieve are unfortunately not unconditionally secure in the sense that they are dependent on a hacker's computational power. Here we theoretically investigate, and experimentally demonstrate with Gaussian displacement and squeezing operations, a quantum solution that achieves the security of a user's privacy using the practical technology of continuous variables. We demonstrate losses of up to 10 km both ways between the client and the server and show that security can still be achieved. Our approach offers a number of practical benefits (from a quantum perspective) that could one day allow the potential widespread adoption of this quantum technology in future cloud-based computing networks.

  12. ERDDAP - An Easier Way for Diverse Clients to Access Scientific Data From Diverse Sources

    Science.gov (United States)

    Mendelssohn, R.; Simons, R. A.

    2008-12-01

    ERDDAP is a new open-source, web-based service that aggregates data from other web services: OPeNDAP grid servers (THREDDS), OPeNDAP sequence servers (Dapper), NOS SOAP service, SOS (IOOS, OOStethys), microWFS, DiGIR (OBIS, BMDE). Regardless of the data source, ERDDAP makes all datasets available to clients via standard (and enhanced) DAP requests and makes some datasets accessible via WMS. A client's request also specifies the desired format for the results, e.g., .asc, .csv, .das, .dds, .dods, htmlTable, XHTML, .mat, netCDF, .kml, .png, or .pdf (formats more directly useful to clients). ERDDAP interprets a client request, requests the data from the data source (in the appropriate way), reformats the data source's response, and sends the result to the client. Thus ERDDAP makes data from diverse sources available to diverse clients via standardized interfaces. Clients don't have to install libraries to get data from ERDDAP because ERDDAP is RESTful and resource-oriented: a URL completely defines a data request and the URL can be used in any application that can send a URL and receive a file. This also makes it easy to use ERDDAP in mashups with other web services. ERDDAP could be extended to support other protocols. ERDDAP's hub and spoke architecture simplifies adding support for new types of data sources and new types of clients. ERDDAP includes metadata management support, catalog services, and services to make graphs and maps.
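
    ERDDAP's "the URL is the request" idea can be sketched as follows (the server address and dataset below are placeholders, not a real deployment; the bracketed constraints follow ERDDAP's griddap query syntax):

```python
# One URL fully specifies dataset, variable, space/time subset and output format,
# so any HTTP-capable client can fetch the data; server and dataset are made up.
import requests

url = ("https://example.org/erddap/griddap/sampleDataset.csv"
       "?sst[(2008-01-01T00:00:00Z)][(30.0):(40.0)][(-80.0):(-70.0)]")
resp = requests.get(url, timeout=60)
print(resp.text[:200])          # CSV rows, ready for a spreadsheet or script
```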

  13. Implementing bioinformatic workflows within the bioextract server

    Science.gov (United States)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  14. GenExp: an interactive web-based genomic DAS client with client-side data rendering.

    Directory of Open Access Journals (Sweden)

    Bernat Gel Moreno

    Full Text Available BACKGROUND: The Distributed Annotation System (DAS offers a standard protocol for sharing and integrating annotations on biological sequences. There are more than 1000 DAS sources available and the number is steadily increasing. Clients are an essential part of the DAS system and integrate data from several independent sources in order to create a useful representation to the user. While web-based DAS clients exist, most of them do not have direct interaction capabilities such as dragging and zooming with the mouse. RESULTS: Here we present GenExp, a web based and fully interactive visual DAS client. GenExp is a genome oriented DAS client capable of creating informative representations of genomic data zooming out from base level to complete chromosomes. It proposes a novel approach to genomic data rendering and uses the latest HTML5 web technologies to create the data representation inside the client browser. Thanks to client-side rendering most position changes do not need a network request to the server and so responses to zooming and panning are almost immediate. In GenExp it is possible to explore the genome intuitively moving it with the mouse just like geographical map applications. Additionally, in GenExp it is possible to have more than one data viewer at the same time and to save the current state of the application to revisit it later on. CONCLUSIONS: GenExp is a new interactive web-based client for DAS and addresses some of the short-comings of the existing clients. It uses client-side data rendering techniques resulting in easier genome browsing and exploration. GenExp is open source under the GPL license and it is freely available at http://gralggen.lsi.upc.edu/recerca/genexp.

  15. Simulation of emission tomography using grid middleware for distributed computing.

    Science.gov (United States)

    Thomason, M G; Longton, R F; Gregor, J; Smith, G T; Hutson, R K

    2004-09-01

    SimSET is Monte Carlo simulation software for emission tomography. This paper describes a simple but effective scheme for parallel execution of SimSET using NetSolve, a client-server system for distributed computation. NetSolve (version 1.4.1) is "grid middleware" which enables a user (the client) to run specific computations remotely and simultaneously on a grid of networked computers (the servers). Since the servers do not have to be identical machines, computation may take place in a heterogeneous environment. To take advantage of diversity in machines and their workloads, a client-side scheduler was implemented for the Monte Carlo simulation. The scheduler partitions the total decay events by taking into account the inherent compute-speeds and recent average workloads, i.e., the scheduler assigns more decay events to processors expected to give faster service and fewer decay events to those expected to give slower service. When compute-speeds and sustained workloads are taken into account, the speed-up is essentially linear in the number of equivalent "maximum-service" processors. One modification in the SimSET code (version 2.6.2.3) was made to ensure that the total number of decay events specified by the user is maintained in the distributed simulation. No other modifications in the standard SimSET code were made. Each processor runs complete SimSET code for its assignment of decay events, independently of others running simultaneously. Empirical results are reported for simulation of a clinical-quality lung perfusion study.
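
    The partitioning idea behind the client-side scheduler can be sketched as follows (weights and numbers are illustrative; the actual SimSET/NetSolve scheduler is more elaborate): decay events are assigned in proportion to compute speed divided by recent load, while the user-specified total is preserved exactly:

```python
# Partition a fixed number of decay events across heterogeneous servers.
def partition_events(total, speeds, loads):
    weights = [s / max(l, 1e-9) for s, l in zip(speeds, loads)]
    total_w = sum(weights)
    shares = [int(total * w / total_w) for w in weights]
    shares[0] += total - sum(shares)    # hand any rounding remainder to one server
    return shares

# Three servers: the fastest, lightly loaded machines receive more events.
print(partition_events(1_000_000, speeds=[1.0, 2.0, 4.0], loads=[0.5, 1.0, 1.0]))
```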

  16. Remote Laboratory Java Server Based on JACOB Project

    Directory of Open Access Journals (Sweden)

    Pavol Bisták

    2011-02-01

    Full Text Available Remote laboratories play an important role in the education of engineers. This paper deals with the structure of remote laboratories. The proposed remote laboratory structure is based on a Java server application that communicates with Matlab through COM technology for data exchange under the Windows operating system. Java does not support COM directly, so the results of the JACOB project are used and modified to cope with this problem. In laboratories for control engineering education, a control algorithm usually runs on a PC with Matlab, and it is this PC that controls the real plant. This is the server side, described in the paper in detail. To demonstrate the possibilities of remote control, a Java client-server application is also introduced; it handles the communication and offers a user-friendly interface for controlling a remote plant and visualizing the measured data.

  17. A distributed computing system for multivariate time series analyses of multichannel neurophysiological data.

    Science.gov (United States)

    Müller, Andy; Osterhage, Hannes; Sowa, Robert; Andrzejak, Ralph G; Mormann, Florian; Lehnertz, Klaus

    2006-04-15

    We present a client-server application for the distributed multivariate analysis of time series using standard PCs. We here concentrate on analyses of multichannel EEG/MEG data, but our method can easily be adapted to other time series. Due to the rapid development of new analysis techniques, the focus in the design of our application was not only on computational performance, but also on high flexibility and expandability of both the client and the server programs. For this purpose, the communication between the server and the clients as well as the building of the computational tasks has been realized via the Extensible Markup Language (XML). Running our newly developed method in an asynchronous distributed environment with random availability of remote and heterogeneous resources, we tested the system's performance for a number of different univariate and bivariate analysis techniques. Results indicate that for most of the currently available analysis techniques, calculations can be performed in real time, which, in principle, allows on-line analyses at relatively low cost.

  18. EarthServer: an Intercontinental Collaboration on Petascale Datacubes

    Science.gov (United States)

    Baumann, P.; Rossi, A. P.

    2015-12-01

    With the unprecedented increase of orbital sensor, in-situ measurement, and simulation data there is a rich, yet not leveraged potential for getting insights from dissecting datasets and rejoining them with other datasets. Obviously, the goal is to allow users to "ask any question, any time" thereby enabling them to "build their own product on the go". One of the most influential initiatives in Big Geo Data is EarthServer which has demonstrated new directions for flexible, scalable EO services based on innovative NewSQL technology. Researchers from Europe, the US and recently Australia have teamed up to rigorously materialize the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently from whatever efficient data structuring a server network may perform internally, users will always see just a few datacubes they can slice and dice. EarthServer has established client and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman, enables direct interaction, including 3-D visualization, what-if scenarios, common EO data processing, and general analytics. Services exclusively rely on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS) including the Web Coverage Processing Service (WCPS). Conversely, EarthServer has significantly shaped and advanced the OGC Big Geo Data standards landscape based on the experience gained. Phase 1 of EarthServer has advanced scalable array database technology into 100+ TB services; in phase 2, Petabyte datacubes will be built in Europe and Australia to perform ad-hoc querying and merging. Standing between EarthServer phase 1 (from 2011 through 2014) and phase 2 (from 2015 through 2018) we present the main results and outline the impact on the international standards landscape; effectively, the Big Geo Data standards established through initiative of

  19. Replicated Data Management for Mobile Computing

    CERN Document Server

    Douglas, Terry

    2008-01-01

    Managing data in a mobile computing environment invariably involves caching or replication. In many cases, a mobile device has access only to data that is stored locally, and much of that data arrives via replication from other devices, PCs, and services. Given portable devices with limited resources, weak or intermittent connectivity, and security vulnerabilities, data replication serves to increase availability, reduce communication costs, foster sharing, and enhance survivability of critical information. Mobile systems have employed a variety of distributed architectures from client-server

  20. Professional SQL Server 2005 administration

    CERN Document Server

    Knight, Brian; Snyder, Wayne; Armand, Jean-Claude; LoForte, Ross; Ji, Haidong

    2007-01-01

    SQL Server 2005 is the largest leap forward for SQL Server since its inception. With this update comes new features that will challenge even the most experienced SQL Server DBAs. Written by a team of some of the best SQL Server experts in the industry, this comprehensive tutorial shows you how to navigate the vastly changed landscape of the SQL Server administration. Drawing on their own first-hand experiences to offer you best practices, unique tips and tricks, and useful workarounds, the authors help you handle even the most difficult SQL Server 2005 administration issues, including blockin

  1. Architectural models for client interaction on service-oriented platforms

    NARCIS (Netherlands)

    Bonino da Silva Santos, L.O.; Ferreira Pires, L.; Sinderen, van M.J.; Sinderen, van M.J.

    2007-01-01

    Service-oriented platforms can provide different levels of functionality to the client applications as well as different interaction models. Depending on the platform’s goals and the computing capacity of their expected clients the platform functionality can range from just an interface to support t

  2. La contrainte client

    Directory of Open Access Journals (Sweden)

    Guillaume Tiffon

    2011-04-01

    Full Text Available Cet article montre que le contact client a beau être ambivalent, dans la mesure où il est à la fois source de contrainte et de reconnaissance, dans certains cas, comme celui des caissières, il constitue avant tout une contrainte, en ce que les clients contrôlent le travail qui s’opère « sous leurs yeux », tandis que, dans d’autres cas, comme celui des kinésithérapeutes, il contribue avant tout à donner du sens au travail et, par là, à susciter l’engagement des travailleurs. L’article souligne ainsi combien la contrainte client revêt des modalités différentes selon la configuration, spatiale et temporelle, dans laquelle se déroule la relation de service, et le différentiel de compétences entre les protagonistes engagés dans cette relation.The client constraint. A comparative analysis of cashiers and physiotherapistsThis article shows that despite the ambivalence of client contact, insofar as it is both a source of constraint and recognition, in some cases, as the ones of cashiers, it isprimarily a constraint: clients control the work that takes place “before their eyes”, whereas in other cases – as in the ones of physiotherapists – it contributes to give meaning to work and, thereby, to arouse the commitment of workers. The article highlights how the client constraint takes on different forms depending on thespatial and temporal configuration where the service relation runs, and the skills differential between the protagonists involved in this relation.El apremio de los clientes. Análisis comparativo entre las cajeras de supermercado y los kinesiterapeutasEn este artículo se demuestra que aunque el contacto con los clientes puede ser percibido como agradable, en realidad en la mayoría de los casos el cliente es percibido como un peso puesto que estos « controlan » visualmente el trabajo de las cajeras mientras que en otras profesiones como es el caso de los kinesiterapeutas la presencia del paciente

  3. Computationally Efficient Searchable Symmetric Encryption

    NARCIS (Netherlands)

    Liesdonk, van Peter; Sedghi, Saeed; Doumen, Jeroen; Hartel, Pieter; Jonker, Willem; Jonker, Willem; Petkovic, Milan

    2010-01-01

    Searchable encryption is a technique that allows a client to store documents on a server in encrypted form. Stored documents can be retrieved selectively while revealing as little information as possible to the server. In the symmetric searchable encryption domain, the storage and the retrieval are

  4. Analysis of free SSL/TLS Certificates and their implementation as Security Mechanism in Application Servers.

    Directory of Open Access Journals (Sweden)

    Mario E. Cueva Hurtado

    2017-02-01

    Full Text Available Security at the application layer (SSL/TLS) provides confidentiality, integrity and authenticity of the data exchanged between two communicating applications. This article is the result of implementing free SSL/TLS certificates on application servers, and it determines the relevant characteristics that an SSL/TLS certificate must have and the certificate authority that generates it. A vulnerability analysis of the application servers is carried out, and an encrypted communication channel is established to protect against attacks such as man-in-the-middle and phishing and to maintain the integrity of the information transmitted between client and server.
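
    A minimal client-side certificate check using only the Python standard library (the host name is an example placeholder) shows the kind of information such an analysis works with; the TLS handshake verifies the chain against the system trust store and exposes the peer certificate:

```python
# Inspect a server's certificate over an encrypted, verified TLS connection.
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.org", 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname="example.org") as tls:
        cert = tls.getpeercert()
        print("issuer: ", cert["issuer"])
        print("expires:", cert["notAfter"])
```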

  5. Continuous-variable quantum computing on encrypted data

    DEFF Research Database (Denmark)

    Marshall, Kevin; Jacobsen, Christian Scheffmann; Schäfermeier, Clemens

    2016-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting a client's privacy, especially in today's era of cloud and distributed computing. In terms of privacy, the best solutions that classical techniques can achieve are unfortunately not unconditionally secure in the sense that they are dependent on a hacker's computational power. Here we theoretically investigate, and experimentally demonstrate with Gaussian displacement and squeezing operations, a quantum solution that achieves the security of a user's privacy using the practical technology of continuous variables. We demonstrate losses of up to 10 km both ways between the client and the server and show that security can still be achieved. Our approach offers a number of practical benefits (from a quantum perspective) that could one day allow the potential widespread adoption of this quantum technology in future cloud-based computing networks.

  6. Secure Two-Party Computation with Low Communication

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Kölker, Jonas; Faust, Sebastian;

    2012-01-01

    We propose a 2-party UC-secure protocol that can compute any function securely. The protocol requires only two messages, communication that is poly-logarithmic in the size of the circuit description of the function, and the workload for one of the parties is also only poly-logarithmic in the size of the circuit. This implies, for instance, delegatable computation that requires no expensive off-line phase and remains secure even if the server learns whether the client accepts its results. To achieve this, we define two new notions of extractable hash functions, propose an instantiation based...

  7. OPC Server and BridgeView Application for High Voltage Power Supply Lecroy 1458

    CERN Document Server

    Swoboda, D; CERN. Geneva

    2000-01-01

    Abstract The aim of this project was to develop an OPC server to communicate over an RS232 serial line. This communication media is commonly used with commercial instruments. The development was made for a High Voltage power supply in the context of the Alice [1] experiment. In addition, the structured modular concept will allow changing the transmission media or power supply type with little effort. The high voltage power supply should be accessible remotely through a network. OPC[2] is an acronym for OLE[3] for Process Control. OPC is based on the DCOM [3] communication protocol, which allows communication with any computer running a Windows based OS. This standard is widely used in industry to access device data through Windows applications. The concept is based on the client-server architecture. The hardware and the software architecture are described. Subsequently details of the implemented programs are given with emphasis on the possibility to replace parts of the software in order to use differ...

  8. The EarthServer Federation: State, Role, and Contribution to GEOSS

    Science.gov (United States)

    Merticariu, Vlad; Baumann, Peter

    2016-04-01

    The intercontinental EarthServer initiative has established a European datacube platform with proven scalability: known databases exceed 100 TB, and single queries have been split across more than 1,000 cloud nodes. Its service interface being rigorously based on the OGC "Big Geo Data" standards, Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS), a series of clients can dock into the services, ranging from open-source OpenLayers and QGIS over open-source NASA WorldWind to proprietary ESRI ArcGIS. Datacube fusion in a "mix and match" style is supported by the platform technology, the rasdaman Array Database System, which transparently federates queries so that users simply approach any node of the federation to access any data item, internally optimized for minimal data transfer. Notably, rasdaman is part of the GEOSS GCI. NASA is contributing its Web WorldWind virtual globe for user-friendly data extraction, navigation, and analysis. Integrated datacube / metadata queries are contributed by CITE. Current federation members include ESA (managed by MEEO s.r.l.), Plymouth Marine Laboratory (PML), the European Centre for Medium-Range Weather Forecasts (ECMWF), Australia's National Computational Infrastructure, and Jacobs University (adding in Planetary Science). Further data centers have expressed interest in joining. We present the EarthServer approach, discuss its underlying technology, and illustrate the contribution this datacube platform can make to GEOSS.

  9. Psychotherapy for Suicidal Clients.

    Science.gov (United States)

    Lester, David

    1994-01-01

    Reviews various systems of psychotherapy for suitability for suicidal clients. Discusses psychoanalysis, cognitive therapy, primal therapy, transactional analysis, Gestalt therapy, reality therapy, person-centered therapy, existential analysis, and Jungian analysis in light of available treatment options. Includes 36 citations. (Author/CRR)

  10. Dynamic Server-Based KML Code Generator Method for Level-of-Detail Traversal of Geospatial Data

    Science.gov (United States)

    Baxes, Gregory; Mixon, Brian; Linger, TIm

    2013-01-01

    Web-based geospatial client applications such as Google Earth and NASA World Wind must listen to data requests, access appropriate stored data, and compile a data response to the requesting client application. This process occurs repeatedly to support multiple client requests and application instances. Newer Web-based geospatial clients also provide user-interactive functionality that is dependent on fast and efficient server responses. With massively large datasets, server-client interaction can become severely impeded because the server must determine the best way to assemble data to meet the client application's request. In client applications such as Google Earth, the user interactively wanders through the data using visually guided panning and zooming actions. With these actions, the client application is continually issuing data requests to the server without knowledge of the server's data structure or extraction/assembly paradigm. A method for efficiently controlling the networked access of a Web-based geospatial browser to server-based datasets, in particular massively sized datasets, has been developed. The method specifically uses the Keyhole Markup Language (KML), an Open Geospatial Consortium (OGC) standard used by Google Earth and other KML-compliant geospatial client applications. The innovation is based on establishing a dynamic cascading KML strategy that is initiated by a KML launch file provided by a data server host to a Google Earth or similar KML-compliant geospatial client application user. Upon execution, the launch KML code issues a request for image data covering an initial geographic region. The server responds with the requested data along with subsequent dynamically generated KML code that directs the client application to make follow-on requests for higher level of detail (LOD) imagery to replace the initial imagery as the user navigates into the dataset. The approach provides an efficient data traversal path and mechanism that can be
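
    The cascading level-of-detail idea can be illustrated with a short, hedged sketch: the Python snippet below (bounding box, pixel threshold and tile URL are invented placeholders, not NASA's actual code) emits a KML NetworkLink whose Region/Lod element makes a KML-compliant client such as Google Earth request the finer-detail child document only once the region occupies enough screen pixels:

        # Hedged sketch: emit a KML fragment instructing a KML client to load a
        # finer-detail child document only when the region fills enough screen
        # pixels. Bounding box, pixel threshold and URL are placeholders.
        KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <Document>
            <NetworkLink>
              <Region>
                <LatLonAltBox>
                  <north>{north}</north><south>{south}</south>
                  <east>{east}</east><west>{west}</west>
                </LatLonAltBox>
                <Lod><minLodPixels>256</minLodPixels></Lod>
              </Region>
              <Link>
                <href>{child_url}</href>
                <viewRefreshMode>onRegion</viewRefreshMode>
              </Link>
            </NetworkLink>
          </Document>
        </kml>"""

        def child_link_kml(north, south, east, west, child_url):
            """Return KML that loads child_url once the region becomes visible enough."""
            return KML_TEMPLATE.format(north=north, south=south, east=east,
                                       west=west, child_url=child_url)

        print(child_link_kml(43.0, 42.0, -103.0, -104.0,
                             "https://example.org/tiles/level2/q0.kml"))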

  11. Integrated Quantum and Classical Key Scheme for Two Servers Password Authentication

    Directory of Open Access Journals (Sweden)

    A. Krishnan

    2010-01-01

    Full Text Available Problem statement: Traditional user authentication systems use passwords for secured access to a central server, which is prone to attack by adversaries. Adversaries gain access to user content on attack-prone servers. To overcome this problem, multi-server systems have been proposed in which the user communicates in parallel with several or all of the servers for authentication. Such systems require large communication bandwidth and synchronization at the user side. Approach: We present an efficient two-server user password authentication scheme that reduces communication traffic and bandwidth consumption between the servers. An integration of quantum and classical key exchange models is deployed to safeguard user access security in large networks. The proposed work presents a two-server system: a front-end service server interacts directly with the user, and a back-end control server is visible only to the service server. The password verification is performed on two long secrets held by the service and control servers. Further, the proposal applies a quantum key distribution model along with classical key exchange in the two-server authentication. Three-party quantum key distribution is used in this model, one variant with implicit user authentication and another with explicit mutual authentication, deployed for e-commerce buyer authentication on Internet peer servers. Results: The effects of online and offline dictionary attacks prevailing in single- and multi-server systems are analyzed. Performance tests show that the authentication success rate of the two-server scheme is 35% better than that of a single server. The integrated Quantum Key Distribution (QKD) and classical public key model showed experimentally better performance in terms of computational efficiency and security rounds (11% improvement) than traditional cryptographic security.
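
    The actual protocol combines classical and quantum key exchange and is not reproduced here; the toy Python sketch below only illustrates one ingredient common to two-server schemes, splitting a password-derived secret into two shares so that neither the service server nor the control server alone learns anything about the password (names and the verification step are illustrative assumptions):

        import hashlib
        import os

        # Toy sketch: split a password-derived secret into two XOR shares, one per
        # server. A real two-server protocol verifies without ever reuniting the
        # shares in one place; this only illustrates the splitting step.
        def split_password(password):
            digest = hashlib.sha256(password.encode()).digest()    # fixed-length secret
            share_service = os.urandom(len(digest))                # random-looking share
            share_control = bytes(a ^ b for a, b in zip(digest, share_service))
            return share_service, share_control                    # XOR of shares == digest

        def verify(password, share_service, share_control):
            combined = bytes(a ^ b for a, b in zip(share_service, share_control))
            return hashlib.sha256(password.encode()).digest() == combined

        s_service, s_control = split_password("correct horse battery staple")
        print(verify("correct horse battery staple", s_service, s_control))   # True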

  12. Destination Serbia: a new life for CERN’s servers

    CERN Multimedia

    Caroline Duc

    2012-01-01

    In order to ensure the computing performance that CERN's research needs, the Computer Centre has to replace its computers regularly. After Morocco, Ghana and Bulgaria, it's Serbia's turn to receive a donation of servers from CERN!   CERN Director-General Rolf Heuer and Jovan Puzovic from the Belgrade Institute of Physics seeing off the servers at the beginning of their journey to Serbia. On Monday 26 November, CERN donated 130 servers to two Serbian institutions: the Belgrade Institute of Physics and the Petnica Science School. In 2012, 559 computers were donated to institutions in Africa and Europe. Since the mid-2000s, the Computer Centre has changed technology and now has about 10,000 computers that have to be renewed every four to five years. Obsolete for the purposes of CERN's cutting-edge research, these computers are still suitable for less demanding applications. Jovan Puzovic, Belgrade Institute of Physics team leader for the NA61 experiment (SHINE), an...

  13. Client-side Skype forensics: an overview

    Science.gov (United States)

    Meißner, Tina; Kröger, Knut; Creutzburg, Reiner

    2013-03-01

    IT security and computer forensics are important components of information technology. In the present study, a client-side Skype forensic analysis is performed. It is designed to explain which kinds of user data are stored on a computer and which tools allow the extraction of those data for a forensic investigation. Both methods are described: a manual analysis and an analysis with (mainly) open source tools.

  14. LHCb: Fabric Management with Diskless Servers and Quattor on LHCb

    CERN Multimedia

    Schweitzer, P; Brarda, L; Neufeld, N

    2011-01-01

    Large scientific experiments nowadays very often use large computer farms to process the events acquired from the detectors. In LHCb a small sysadmin team manages 1400 servers of the LHCb Event Filter Farm, but also a wide variety of control servers for the detector electronics and infrastructure computers: file servers, gateways, DNS, DHCP and others. This variety of servers could not be handled without a solid fabric management system. We chose the Quattor toolkit for this task. We will present our use of this toolkit, with an emphasis on how we handle our diskless nodes (Event Filter Farm nodes and computers embedded in the acquisition electronics cards). We will show our current tests to replace the standard (RedHat/Scientific Linux) way of handling diskless nodes with fusion filesystems and how this improves fabric management.

  15. Kelayakan Raspberry Pi sebagai Web Server: Perbandingan Kinerja Nginx, Apache, dan Lighttpd pada Platform Raspberry Pi

    Directory of Open Access Journals (Sweden)

    Rahmad Dawood

    2014-04-01

    Full Text Available Raspberry Pi is a small-sized computer, but it can function like an ordinary computer. Because it can function like a regular PC, it is also possible to run a web server application on the Raspberry Pi. This paper reports results from testing the feasibility and performance of running a web server on the Raspberry Pi. The test was conducted on the current top three most popular web servers: Apache, Nginx, and Lighttpd. The parameters used to evaluate the feasibility and performance of these web servers were maximum request and reply time. The results from the test showed that it is feasible to run all three web servers on the Raspberry Pi, but Nginx gave the best performance, followed by Lighttpd and Apache. Keywords: Raspberry Pi, web server, Apache, Lighttpd, Nginx, web server performance

  16. Modified LRU Algorithm To Implement Proxy Server With Caching Policies

    Directory of Open Access Journals (Sweden)

    Jitendra Singh Kushwah

    2011-11-01

    Full Text Available In order to produce and develop a software system, it is necessary to have a method for choosing a suitable algorithm that satisfies the required quality attributes and maintains a trade-off between sometimes conflicting ones. A proxy server is placed between the real server and the clients, and it uses caching policies, implemented by algorithms, to store web documents. The existing algorithms have several drawbacks: they are applicable only to video files and not to other resource types; they say nothing about organizing the data on the proxy server's disk storage; they are difficult to implement; and they require knowledge of the workloads on the proxy server. A major problem in the previously described algorithms is "cold cache pollution". Since all the existing caching algorithms suffer from these disadvantages, this paper proposes a technique to remove the problem of cold cache pollution and proves mathematically that it is better than the existing LRU-Distance algorithm.
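
    As a hedged illustration of the kind of policy discussed (not the paper's exact algorithm), the Python sketch below combines plain LRU eviction with a simple admission filter that refuses to cache an object until its second request, one common way to limit cold-cache pollution:

        from collections import OrderedDict, defaultdict

        # Illustrative only -- not the paper's algorithm: LRU eviction plus a
        # simple admission filter (cache an object only on its second request).
        class AdmissionLRUCache:
            def __init__(self, capacity):
                self.capacity = capacity
                self.cache = OrderedDict()        # url -> object, ordered by recency
                self.requests = defaultdict(int)  # url -> how often it was fetched

            def get(self, url):
                if url in self.cache:
                    self.cache.move_to_end(url)   # mark as most recently used
                    return self.cache[url]
                return None                       # miss: caller fetches from origin

            def put(self, url, obj):
                self.requests[url] += 1
                if self.requests[url] < 2:        # admission filter: skip one-hit wonders
                    return
                self.cache[url] = obj
                self.cache.move_to_end(url)
                while len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)   # evict the least recently used entry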

  17. The Competitive Advantage: Client Service.

    Science.gov (United States)

    Leffel, Linda G.; DeBord, Karen B.

    The adult education literature contains a considerable amount of research on and discussion of client service in the marketing process, management and staff roles in service- and product-oriented businesses, and the importance of client service and service quality to survival in the marketplace. By applying the principles of client-oriented…

  18. DNS BIND Server Configuration

    Directory of Open Access Journals (Sweden)

    Radu MARSANU

    2011-01-01

    Full Text Available After a brief presentation of the DNS and BIND standard for Unix platforms, the paper presents an application whose principal objective is the configuration of the DNS BIND 9 server. The general objectives of the application are presented, followed by a description of the program's design details.

  19. PDS: A Performance Database Server

    Directory of Open Access Journals (Sweden)

    Michael W. Berry

    1994-01-01

    Full Text Available The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.

  20. Beginning SQL Server 2008 Administration

    CERN Document Server

    Walters, R

    2009-01-01

    Beginning SQL Server 2008 Administration is essential for anyone wishing to learn about implementing and managing SQL Server 2008 database. From college students, to experienced database administrators from other platforms, to those already familiar with SQL Server and wanting to fill in some gaps of knowledge, this book will bring all readers up to speed on the enterprise platform Microsoft SQL Server 2008. * Clearly describes relational database concepts* Explains the SQL Server database engine and supporting tools* Shows various database maintenance scenarios What you'll learn* Understand c

  1. Microsoft SQL Server 2012 bible

    CERN Document Server

    Jorgensen, Adam; LeBlanc, Patrick; Cherry, Denny; Nelson, Aaron

    2012-01-01

    Harness the powerful new SQL Server 2012 Microsoft SQL Server 2012 is the most significant update to this product since 2005, and it may change how database administrators and developers perform many aspects of their jobs. If you're a database administrator or developer, Microsoft SQL Server 2012 Bible teaches you everything you need to take full advantage of this major release. This detailed guide not only covers all the new features of SQL Server 2012, it also shows you step by step how to develop top-notch SQL Server databases and new data connections and keep your databases performing at p

  2. Lexical Server of Polish Language

    Directory of Open Access Journals (Sweden)

    Marek Gajecki

    2001-01-01

    Full Text Available This paper presents the Lexical Server of Polish Language, a tool that aids natural language processing (NLP). The database of the server consists of dictionary units enriched with lexical information. The lexical server should be able to perform identification of word forms and generation of all inflected forms of a word. The server is dedicated to people who are looking for NLP algorithms or implementing them. The algorithms can be implemented in different kinds of programming languages and different operating systems. There are some examples of problems where the lexical server can be useful: automatic text correction, text indexing, keyword extraction, and text profile building.

  3. Mastering Microsoft Exchange Server 2010

    CERN Document Server

    McBee, Jim

    2010-01-01

    A top-selling guide to Exchange Server-now fully updated for Exchange Server 2010. Keep your Microsoft messaging system up to date and protected with the very newest version, Exchange Server 2010, and this comprehensive guide. Whether you're upgrading from Exchange Server 2007 SP1 or earlier, installing for the first time, or migrating from another system, this step-by-step guide provides the hands-on instruction, practical application, and real-world advice you need.: Explains Microsoft Exchange Server 2010, the latest release of Microsoft's messaging system that protects against spam and vir

  4. Obtaining the Knowledge of a Server Performance from Non-Intrusively Measurable Metrics

    OpenAIRE

    Satoru Ohta

    2016-01-01

    Most network services are provided by server computers. To provide these services with good quality, the server performance must be managed adequately. For the server management, the performance information is commonly obtained from the operating system (OS) and hardware of the managed computer. However, this method has a disadvantage. If the performance is degraded by excessive load or hardware faults, it becomes difficult to collect and transmit information. Thus, it is necessary to obtain ...

  5. Solid waste information and tracking system server conversion project management plan

    Energy Technology Data Exchange (ETDEWEB)

    MAY, D.L.

    1999-04-12

    This is the Project Management Plan governing the conversion of the Solid Waste Information and Tracking System (SWITS) to a client-server architecture. The Solid Waste Information and Tracking System Project Management Plan (PMP) describes the background, planning and management of the SWITS conversion. Requirements and specification documentation needed for the SWITS conversion will be released as supporting documents.

  6. On multi-class multi-server queueing and spare parts management

    NARCIS (Netherlands)

    Harten, van Aart; Sleptchenko, Andrei

    2000-01-01

    Multi-class multi-server queuing problems are a generalization of the well-known M/M/k situation to arrival processes with clients of N types that require exponentially distributed service with different average service times. Problems of this sort arise naturally in various applications, such as spa

  7. Extending Binary Large Object Support to Open Grid Services Architecture-Data Access and Integration Middleware Client Toolkit

    Directory of Open Access Journals (Sweden)

    Kiran K. Patnaik

    2011-01-01

    Full Text Available Problem statement: OGSA-DAI middleware allows data resources to be federated and accessed via web services on the web or within grids or clouds. It provides a client API for writing programs that access the exposed databases. Migrating existing applications to the new technology and using a new API to access DBMS data containing BLOBs is difficult and discouraging. A JDBC driver is a much more convenient alternative to the existing mechanism; it provides an extension to the OGSA-DAI middleware and allows applications to use databases exposed in a grid through OGSA-DAI 3.0. However, the driver does not support Binary Large Objects (BLOBs). Approach: The driver is enhanced to support BLOBs using the OGSA-DAI client API. It transforms JDBC calls into an OGSA-DAI workflow request and sends it to the server using Web Services (WS). The client API of OGSA-DAI uses activities that are connected to form a workflow and executed using a pipeline. This workflow mechanism is embedded into the driver. The WS container dispatches the request to the OGSA-DAI middleware for processing, and the result is then transformed back into an instance of the ResultSet implementation using the OGSA-DAI client API before it is returned to the user. Results: Tests on the handling of BLOBs (images, flash files and videos) ranging in size from 1 KB to 2 GB were carried out on Oracle, MySQL and PostgreSQL databases using the enhanced JDBC driver, and it performed well. Conclusion: The enhanced JDBC driver now offers users with no experience in grid computing, specifically in OGSA-DAI, the possibility of giving their applications the ability to access databases exposed on the grid with minimal effort.

  8. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
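
    The efficiency metric defined above reduces to computations per joule; a minimal Python sketch with placeholder numbers (not measurements from the report):

        # Minimal sketch of the efficiency metric defined above: average compute
        # rate divided by average power draw, i.e. computations per joule.
        def server_efficiency(operations_completed, elapsed_seconds, avg_power_watts):
            compute_rate = operations_completed / elapsed_seconds   # operations per second
            return compute_rate / avg_power_watts                   # operations per joule

        print(server_efficiency(operations_completed=1.2e9,
                                elapsed_seconds=600,
                                avg_power_watts=250))               # -> 8000.0 ops/J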

  9. System Design for a Media Server Based on Cloud Computing Technology and Application Analysis

    Institute of Scientific and Technical Information of China (English)

    李青; 唐哲红; 宋阿芳; 王发光

    2013-01-01

    This paper proposes multi-tenant resource scheduling and load balancing techniques and processes for building a cloud-based media resource pool. It introduces application solutions for cloud media on the IMS service platform and the experimental results obtained with cloud media resource technology, and it discusses cloud media business models. The work provides a new approach for promoting the development of local cloud media server hardware and software manufacturers and the related industry chain.

  10. Design and Implementation of an IP based authentication mechanism for Open Source Proxy Servers in Interception Mode

    Directory of Open Access Journals (Sweden)

    Tejaswi Agarwal

    2013-02-01

    Full Text Available Proxy servers are being increasingly deployed at organizations for performance benefits; however, there are still drawbacks in the ease of client authentication in interception proxy mode, mainly for open source proxy servers. Technically, interception mode is not designed for client authentication, but deployment in certain organizations does require this feature. In this paper, we focus on the World Wide Web, highlight the existing transparent proxy authentication mechanisms and their drawbacks, and propose an authentication scheme for transparent proxy users using external scripts based on the client's Internet Protocol address. This authentication mechanism has been implemented and verified on Squid, one of the most widely used open source HTTP proxy servers.
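
    Squid supports this pattern through external ACL helpers; the hedged Python sketch below (the whitelist and helper path are placeholders) reads one client IP per line on stdin and answers OK or ERR, and it would be wired in with squid.conf lines of the form "external_acl_type ipauth %SRC /usr/local/bin/ipauth.py" and "acl authed external ipauth" (both illustrative):

        import sys

        # Hedged sketch of a Squid external ACL helper: one source IP per line on
        # stdin, answer OK or ERR. The whitelist is a placeholder.
        ALLOWED = {"10.0.0.15", "10.0.0.23"}

        def main():
            for line in sys.stdin:
                token = line.strip().split()[0] if line.strip() else ""
                sys.stdout.write("OK\n" if token in ALLOWED else "ERR\n")
                sys.stdout.flush()        # Squid expects an immediate reply per request line

        if __name__ == "__main__":
            main()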

  11. GeneBee-net: Internet-based server for analyzing biopolymers

    Energy Technology Data Exchange (ETDEWEB)

    Brodsky, L.I.; Ivanov, V.V.; Nikolaev, V.K. [Small Scientific Manufacturing Enterprise, Moscow (Russian Federation)] [and others]

    1995-08-01

    This work describes a network server for searching databanks of biopolymer structures and performing other biocomputing procedures; it is available via direct Internet connection. Basic server procedures are dedicated to homology (similarity) search of sequence and 3D structure of proteins. The homologies found could be used to build multiple alignments, predict protein and RNA secondary structure, and construct phylogenetic trees. In addition to traditional methods of sequence similarity search, the authors propose "non-matrix" (correlational) search. An analogous approach is used to identify regions of similar tertiary structure of proteins. Algorithm concepts and usage examples are presented for new methods. Service logic is based upon interaction of a client program and server procedures. The client program allows the compilation of queries and the processing of results of an analysis.

  12. A GCM Solution for Leveraging Server-side JMS Functionality to Android-based Trading Application

    Directory of Open Access Journals (Sweden)

    Claudiu VINTE

    2013-01-01

    Full Text Available The paper presents our solution for a message-oriented communication mechanism, employing Google Cloud Messaging (GCM) on the client side and Java Message Service (JMS) on the server side, in order to leverage JMS functionality for an Android-based trading application. Our ongoing research has been focused on conceiving a way to expose the trading services offered by our academic trading system ASETS to a mobile trading application based on the Android platform. The ASETS trading platform is a distributed SOA implementation with an original API based on JMS. In order to design and implement an Android-based client able to communicate with the server-side components of ASETS, in a manner consistent with the publisher/subscriber JMS communication model, it was particularly necessary to have object-embedded messages, produced by various ASETS services, pushed to the client application. While point-to-point communication could be handled on the client side by employing synchronous HTTP socket connections over TCP/IP, the asynchronously generated messages from the server side had to reach the client application in a push manner.
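
    A hedged sketch of the server-side bridge (not the ASETS code; the endpoint, API key and payload are placeholders, and the legacy GCM HTTP endpoint shown has since been superseded by FCM): a JMS message listener could hand each received message to a function like the one below, which pushes it to the Android device over HTTP:

        import json
        import urllib.request

        # Hedged sketch: push a server-side message to an Android device via the
        # legacy GCM HTTP endpoint. API key, registration token and payload are
        # placeholders; in the paper's design the payload would come from a JMS
        # subscriber rather than a literal dict.
        GCM_URL = "https://android.googleapis.com/gcm/send"
        API_KEY = "AIza...placeholder"

        def push_to_device(registration_token, payload):
            body = json.dumps({"to": registration_token, "data": payload}).encode("utf-8")
            request = urllib.request.Request(
                GCM_URL, data=body,
                headers={"Authorization": "key=" + API_KEY,
                         "Content-Type": "application/json"})
            with urllib.request.urlopen(request) as response:
                return response.read().decode("utf-8")   # delivery report from GCM

        # e.g. called from a JMS MessageListener when a trade confirmation arrives:
        # push_to_device(token, {"type": "orderExecuted", "orderId": "123"})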

  13. Implementation of a Secured system with Roaming Server and Roaming Ports

    Directory of Open Access Journals (Sweden)

    R. Bharathi,

    2011-05-01

    Full Text Available The main goal of this paper is to design and implement a system secured against server hijacking, which leads to Denial of Service (DoS) [5] attacks. This system uses more than one server for providing security, but only one server is active at a time. The inactive servers act as roaming honeypots [9]. The source address of any request that hits a honeypot is recorded and all its future requests are dropped. Thus this system acts as an Intrusion Detection System (IDS). It is impossible to identify the active servers and the honeypots at a given moment even if attackers obtain the identities of all servers. Moreover, the UDP/TCP port number used by the server varies as a function of time and a shared secret between the server and the client. This mechanism simplifies both the detection and filtering of malicious packets and does not require any change to existing protocols. This port hopping [10], or roaming port, technique is compatible with the UDP and TCP protocols. This system can be successfully implemented in real time.
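
    A minimal sketch of the roaming-port idea, assuming an HMAC over a time slot and a pre-shared secret (the constants are invented, not the paper's parameters); both endpoints evaluate the same function, so the server listens on, and the client connects to, whatever port the current slot yields:

        import hashlib
        import hmac
        import time

        # Illustrative sketch of port hopping: both endpoints derive the current
        # service port from a shared secret and the current time slot, so packets
        # on any other port can be dropped. All constants are assumptions.
        PORT_RANGE_START = 20000
        PORT_RANGE_SIZE = 10000
        SLOT_SECONDS = 30                 # how often the port changes

        def current_port(shared_secret, now=None):
            slot = int((time.time() if now is None else now) // SLOT_SECONDS)
            digest = hmac.new(shared_secret, str(slot).encode(), hashlib.sha256).digest()
            return PORT_RANGE_START + int.from_bytes(digest[:4], "big") % PORT_RANGE_SIZE

        secret = b"pre-shared key between client and server"
        print(current_port(secret))       # both sides compute the same port in this slot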

  14. The web server of IBM's Bioinformatics and Pattern Discovery group

    OpenAIRE

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel,; Shibuya, Tetsuo

    2003-01-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic ...

  15. Proposal and Implementation of SSH Client System Using Ajax

    Science.gov (United States)

    Kosuda, Yusuke; Sasaki, Ryoichi

    Technology called Ajax gives web applications the functionality and operability of desktop applications. In this study, we propose and implement a Secure Shell (SSH) client system using Ajax, independent of the OS or Java execution environment. In this system, SSH packets are generated in a web browser by using JavaScript, and a web server works as a proxy in communication with an SSH server to realize end-to-end SSH communication. We implemented a prototype program and confirmed by experiment that it runs on several web browsers and mobile phones. This system has enabled secure SSH communication from a PC at an Internet cafe or from any mobile phone. By measuring the processing performance, we verified satisfactory performance for emergency use, although the speed was unsatisfactory in some cases on mobile phones. The system proposed in this study will be effective in various fields of E-Business.

  16. SPEER-SERVER: a web server for prediction of protein specificity determining sites

    Science.gov (United States)

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J.; Panchenko, Anna R.; Chakrabarti, Saikat

    2012-01-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. PMID:22689646

  17. Caching Servers for ATLAS

    CERN Document Server

    Gardner, Robert; The ATLAS collaboration

    2017-01-01

    As many LHC Tier-3 and some Tier-2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.

  18. Caching Servers for ATLAS

    CERN Document Server

    Gardner, Robert; The ATLAS collaboration

    2016-01-01

    As many Tier 3 and some Tier 2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.

  19. Detecting and Preventing Security Threats on Servers and Browsers

    Directory of Open Access Journals (Sweden)

    Mr. Nandish. U. G Dr. Balakrishna. R Mr. Naveen. L Mr. Anand Kumar K. S

    2012-02-01

    Full Text Available Our reliance on web-based services accessed through browsers for everyday activities has increased over the years. Every day new vulnerabilities are found in what were previously believed to be secure applications, unlocking new risks and security hazards that can be exploited by malicious advertisers or intruders to compromise the security of systems. Using cross-site scripting techniques, intruders can hijack web sessions and craft credible phishing sites. Similarly, intruders may harm the server by uploading malicious executables and batch files. On the other hand, JavaScript code downloaded into the browser can attack client machines to steal users' credentials (XSS attacks) and lure users into providing sensitive information to unauthorized parties (phishing attacks). We propose here a model for detecting and preventing malicious files and cross-site scripting attacks based on monitoring JavaScript code execution and comparing the execution to high-level policies in order to detect malicious code behavior. The solution also protects servers from dangerous DOS commands and executable files. The model follows an approach similar to that of hackers and security analysts to discover vulnerabilities in network-connected web servers. It uses both manually and automatically generated rules to mitigate possible cross-site scripting attacks. The work undertaken covers solutions that prevent the theft of users' credentials from client machines via cookie hijacking, as well as preventing browser crashes.

  20. A Low-Cost Remote Healthcare Monitor System Based on Embedded Server

    Directory of Open Access Journals (Sweden)

    He Liu

    2013-04-01

    Full Text Available In the paper, we propose a scheme for a low-cost remote healthcare monitoring system based on an embedded server between home and hospital. In the scheme, we design an embedded server based on an ARM9 microprocessor. The embedded server supplies various interfaces, such as GPIO and serial interfaces, which can acquire physiological signals such as electrocardiograph, heart rate, respiration wave, blood pressure, oxygen saturation, body temperature and so on through connected sensor modules. The network is based on a local area network and adopts the Browser/Server model. Each home with an embedded server acts as a server endpoint and the hospital acts as a browser endpoint. Every embedded server owns an independent static Internet protocol address. Doctors can easily acquire a patient's physiological information by entering the patient's Internet protocol address in any computer browser. The embedded server can store the patient's physiological information in a database on an 8 GB SD card, and doctors can download the database information to local computers. All software in the embedded server can be conveniently upgraded remotely from a single hospital computer. The remote healthcare monitoring system based on an embedded server has the advantages of low cost, convenience and feasibility.

  1. Contiki NTP Client

    OpenAIRE

    Luštický, Josef

    2012-01-01

    This BSc Thesis was performed during a study stay at the RheinMain University of Applied Sciences in Wiesbaden, Germany. The purpose of this thesis is to describe the Contiki operating system for embedded systems and the NTP time synchronisation protocol, and to design and implement an NTP client for the Contiki operating system.

  2. Testing Software for a Computer Monitoring System Based on a TCP Client

    Institute of Scientific and Technical Information of China (English)

    马玉春; 汪文彬; 李应勇

    2014-01-01

    With the development of the Internet, TCP communication has become widespread and is widely applied in computer monitoring systems. Key technologies of computer monitoring systems, including encoding and decoding and the conversion between them, block checking of protocols, and data sending and receiving via a TCP client, are analyzed in detail; on this basis, a universal, multifunctional testing software for computer monitoring systems is developed. The testing software acts as the supervisory computer and can test the slave computer in both automatic and manual sending modes. With the help of RS-232/RJ-45 protocol conversion software, remote devices or systems with RS-232 interfaces can be tested as well.
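
    A hedged Python sketch of a minimal TCP test client of this kind (host, port and the one-byte XOR checksum framing are invented placeholders, not the protocol analyzed in the paper):

        import socket

        # Illustrative sketch: a small TCP test client that frames a command with
        # a one-byte XOR checksum, sends it to the device under test and returns
        # the reply. Host/port and frame layout are placeholders.
        def xor_checksum(data):
            value = 0
            for byte in data:
                value ^= byte
            return value

        def send_command(host, port, payload, timeout=3.0):
            frame = payload + bytes([xor_checksum(payload)])
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(frame)
                return sock.recv(4096)

        # e.g. querying a monitored device in manual-send mode:
        # print(send_command("192.168.1.50", 9000, b"\x01\x02READ_STATUS"))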

  3. ISPIDER Central: an integrated database web-server for proteomics.

    Science.gov (United States)

    Siepen, Jennifer A; Belhajjame, Khalid; Selley, Julian N; Embury, Suzanne M; Paton, Norman W; Goble, Carole A; Oliver, Stephen G; Stevens, Robert; Zamboulis, Lucas; Martin, Nigel; Poulovassillis, Alexandra; Jones, Philip; Côté, Richard; Hermjakob, Henning; Pentony, Melissa M; Jones, David T; Orengo, Christine A; Hubbard, Simon J

    2008-07-01

    Despite the growing volumes of proteomic data, integration of the underlying results remains problematic owing to differences in formats, data captured, protein accessions and services available from the individual repositories. To address this, we present the ISPIDER Central Proteomic Database search (http://www.ispider.manchester.ac.uk/cgi-bin/ProteomicSearch.pl), an integration service offering novel search capabilities over leading, mature, proteomic repositories including PRoteomics IDEntifications database (PRIDE), PepSeeker, PeptideAtlas and the Global Proteome Machine. It enables users to search for proteins and peptides that have been characterised in mass spectrometry-based proteomics experiments from different groups, stored in different databases, and view the collated results with specialist viewers/clients. In order to overcome limitations imposed by the great variability in protein accessions used by individual laboratories, the European Bioinformatics Institute's Protein Identifier Cross-Reference (PICR) service is used to resolve accessions from different sequence repositories. Custom-built clients allow users to view peptide/protein identifications in different contexts from multiple experiments and repositories, as well as integration with the Dasty2 client supporting any annotations available from Distributed Annotation System servers. Further information on the protein hits may also be added via external web services able to take a protein as input. This web server offers the first truly integrated access to proteomics repositories and provides a unique service to biologists interested in mass spectrometry-based proteomics.

  4. PONDEROSA-C/S: client–server based software package for automated protein 3D structure determination

    OpenAIRE

    Lee, Woonghee; Stark, Jaime L.; Markley, John L.

    2014-01-01

    Peak-picking Of Noe Data Enabled by Restriction Of Shift Assignments-Client Server (PONDEROSA-C/S) builds on the original PONDEROSA software (Lee et al. in Bioinformatics 27:1727–1728. doi:10.1093/bioinformatics/btr200, 2011) and includes improved features for structure calculation and refinement. PONDEROSA-C/S consists of three programs: Ponderosa Server, Ponderosa Client, and Ponderosa Analyzer. PONDEROSA-C/S takes as input the protein sequence, a list of assigned chemical shifts, and nucle...

  5. Enhancing Security in Cloud Computing for Third Party Auditor by Self-destruction Mechanism

    Directory of Open Access Journals (Sweden)

    Muzammil H. Mohammed

    2014-07-01

    Full Text Available The main aim of this study is cloud computing systems in which large amounts of data are maintained in cloud storage and used for application-based services for clients. The privacy of this bulk data is not properly maintained by the cloud service provider: without the knowledge of the authorized client, the data can be viewed by another user with the permission of the Cloud Service Provider (CSP). Many cryptographic techniques can be used for data privacy with the Third Party Auditor (TPA), the trusted authority that audits and verifies integrity in the cloud. The data loaded into the cloud can be viewed by the authorized user, while a tag-based copy of the data is placed with the TPA, so data privacy can be affected within the TPA system. In the proposed system, data privacy is maintained in the TPA view by using a self-destruction mechanism that destroys the data after it has been viewable for a particular time, so that the viewed data copy is destroyed in the TPA. The cloud service provider can securely load the data into the cloud via the TPA server. The main advantage of the self-destruction mechanism is that, without the permission of the particular authenticated client, no other user can view that client's data in the cloud via the TPA server. Data privacy can thus be properly maintained in the cloud service.
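
    A toy Python sketch of the self-destruction idea, under the assumption that the auditor's copy is wrapped with an expiry after which reads fail and the buffer is overwritten (this is an illustration only, not the paper's mechanism):

        import time

        # Toy sketch: the copy handed to the auditor carries an expiry time, after
        # which reads fail and the buffer is wiped.
        class SelfDestructingCopy:
            def __init__(self, data, lifetime_seconds):
                self._data = bytearray(data)
                self._expires_at = time.time() + lifetime_seconds

            def read(self):
                if time.time() >= self._expires_at:
                    self._wipe()
                    raise PermissionError("audit copy has expired")
                return bytes(self._data)

            def _wipe(self):
                for i in range(len(self._data)):
                    self._data[i] = 0     # overwrite the cached copy

        copy_for_tpa = SelfDestructingCopy(b"tagged audit data", lifetime_seconds=60)
        print(copy_for_tpa.read())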

  6. Cloud Computing, I-Service, And IT Service Provisioning

    Directory of Open Access Journals (Sweden)

    Harry Katzan

    2011-05-01

    Full Text Available Cloud computing is an architecture for providing computing service via the Internet.  Use of the term “cloud” is a metaphor for the representation of the Internet used in most systems diagrams.  In this case, the Internet is the transport mechanism between a client and a server located somewhere in cyberspace, as compared to having computer applications residing on an “on premises” computer.  Adoption of cloud computing practically eliminates two ongoing problems in IT service provisioning: the upfront costs of acquiring computational resources and the time delay of building and deploying software applications.  This paper covers both subjects. 

  7. Tag Based Client Side Detection of Content Sniffing Attacks with File Encryption and File Splitter Technique

    Directory of Open Access Journals (Sweden)

    Syed Imran Ahmed Qadri

    2012-09-01

    Full Text Available In this paper we provide a security framework for the server and client side, with prevention methods applied on the server side and alert replication on the client side. Content sniffing attacks occur if browsers render non-HTML files embedded with malicious HTML content or JavaScript code as HTML files. The framework mitigates effects such as the stealing of sensitive information through the execution of malicious JavaScript code. In this framework the client accesses data that is encrypted on the server side. On the server, data is encrypted using private key cryptography, and the file is sent after splitting so that execution time is reduced. We also add a tag bit concept as a means of checking for alteration: if an alteration is performed, the tag bit changes. The tag bit is generated by a message digest algorithm. We have implemented our approach in a Java-based environment that can be integrated into web applications written in various languages.
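
    A minimal sketch of the tag idea, assuming the tag is an HMAC-SHA256 digest over each encrypted chunk produced by the file splitter (key handling and the splitting itself are omitted; names are placeholders):

        import hashlib
        import hmac

        # Minimal sketch: a keyed digest ("tag") over each encrypted chunk; the
        # client recomputes it to detect alteration in transit.
        def make_tag(chunk, tag_key):
            return hmac.new(tag_key, chunk, hashlib.sha256).hexdigest()

        def verify_tag(chunk, tag_key, received_tag):
            return hmac.compare_digest(make_tag(chunk, tag_key), received_tag)

        key = b"shared tag key"
        chunk = b"...encrypted file chunk..."
        tag = make_tag(chunk, key)                  # sent alongside the chunk
        print(verify_tag(chunk, key, tag))          # False would indicate tampering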

  8. Adaptively Secure Computationally Efficient Searchable Symmetric Encryption

    NARCIS (Netherlands)

    Sedghi, S.; Liesdonk, van P.; Doumen, J.M.; Hartel, P.H.; Jonker, W.

    2009-01-01

    Searchable encryption is a technique that allows a client to store documents on a server in encrypted form. Stored documents can be retrieved selectively while revealing as little information as possible to the server. In the symmetric searchable encryption domain, the storage and the retrieval are

  9. Mastering Microsoft Exchange Server 2013

    CERN Document Server

    Elfassy, David

    2013-01-01

    The bestselling guide to Exchange Server, fully updated for the newest version Microsoft Exchange Server 2013 is touted as a solution for lowering the total cost of ownership, whether deployed on-premises or in the cloud. Like the earlier editions, this comprehensive guide covers every aspect of installing, configuring, and managing this multifaceted collaboration system. It offers Windows systems administrators and consultants a complete tutorial and reference, ideal for anyone installing Exchange Server for the first time or those migrating from an earlier Exchange Server version.Microsoft

  10. Overview of Ontology Servers Research

    Directory of Open Access Journals (Sweden)

    Robert M. Colomb

    2007-06-01

    Full Text Available An ontology is increasingly becoming an essential tool for solving problems in many research areas. An ontology is a complex information object that can contain millions of concepts in complex relationships. When we want to manage complex information objects, we generally turn to information systems technology. An information system intended to manage ontologies is called an ontology server. Ontology server technology is, at the time of writing, quite immature. Therefore, this paper reviews and compares the main ontology servers that have been reported in the literature. As a result, we point out several research questions related to server technology.

  11. Optimal Configuration of Fault-Tolerance Parameters for Distributed Server Access

    DEFF Research Database (Denmark)

    Daidone, Alessandro; Renier, Thibault; Bondavalli, Andrea

    2013-01-01

    Server replication is a common fault-tolerance strategy to improve transaction dependability for services in communications networks. In distributed architectures, fault-diagnosis and recovery are implemented via the interaction of the server replicas with the clients and other entities such as e...... in replicated server architectures. In order to obtain insight into the system behaviour, a set of relevant environment parameters and controllable fault-tolerance parameters are chosen and the dependability/performance trade-off is evaluated....... such as enhanced name servers. Such architectures provide an increased number of redundancy configuration choices. The influence of a (wide area) network connection can be quite significant and induce trade-offs between dependability and user-perceived performance. This paper develops a quantitative stochastic...

  12. Architecture Research of Non-Stop Computer System

    Institute of Scientific and Technical Information of China (English)

    LIUXinsong; QIUYuanjie; YANGFeng; YANGongjun; GUPan; GAOKe

    2004-01-01

    A distributed and parallel server system with a distributed and parallel I/O interface has solved the bottleneck between the server system and the client system, and has also solved the problem of rebuilding after a system fault. However, the system still has some shortcomings: the switch is the system bottleneck and the system is not adapted to a WAN (Wide Area Network). Therefore, we put forward a new system architecture to overcome these shortcomings and develop a non-stop computer system. The basis of a non-stop system is rebuilding after a system fault. The inner architecture of a non-stop system must be redundant, and the redundancy is fault-tolerance redundancy based on a distributed mechanism, not backup redundancy. Analysis and test results show that the system rebuild time after a fault is on the scale of seconds, and its rebuild capability is strong enough that the system can be non-stop over its lifetime.

  13. Framework to Solve Load Balancing Problem in Heterogeneous Web Servers

    CERN Document Server

    Sharma, Ms Deepti

    2011-01-01

    For popular websites the most important concern is to handle incoming load dynamically among web servers, so that they can respond to their clients without any wait or failure. Different websites use different strategies to distribute load among web servers, but most schemes concentrate on only one factor, the number of requests. None of the schemes consider that different types of requests require different levels of processing effort to answer, keep a status record of all the web servers associated with one domain name, or provide a mechanism to handle the situation when one of the servers is not working. Therefore, there is a fundamental need to develop a strategy for dynamic load allocation on the web side. In this paper, an effort has been made to introduce a cluster-based framework to solve the load distribution problem. This framework aims to distribute load among clusters on the basis of their operational capabilities. Moreover, the experimental results are shown with the help of an example, algorithm...
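
    A hedged sketch of capability-weighted dispatch (the paper's own algorithm is not reproduced): pick a cluster in proportion to its capability weight while skipping clusters marked as down:

        import random

        # Illustrative sketch: choose a cluster in proportion to its capability
        # weight, skipping clusters marked as down. Names and weights are placeholders.
        clusters = [
            {"name": "static-content", "weight": 1, "alive": True},
            {"name": "dynamic-pages",  "weight": 3, "alive": True},
            {"name": "heavy-reports",  "weight": 5, "alive": False},   # currently failed
        ]

        def pick_cluster(clusters):
            alive = [c for c in clusters if c["alive"]]
            if not alive:
                raise RuntimeError("no cluster available")
            point = random.uniform(0, sum(c["weight"] for c in alive))
            for cluster in alive:
                point -= cluster["weight"]
                if point <= 0:
                    return cluster["name"]
            return alive[-1]["name"]

        print(pick_cluster(clusters))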

  14. A Novel Thin Client Architecture with Hybrid Push-Pull Model, Adaptive Display Pre-Fetching and Graph Colouring

    Directory of Open Access Journals (Sweden)

    Sumalatha.M.R

    2016-06-01

    Full Text Available The advent of cloud computing has driven away the notion of having to own sophisticated hardware devices for performing computing-intensive tasks, a feature that is very valuable for resource-constrained devices. In mobile cloud computing, it is sufficient that the device be a thin client, i.e. one that concentrates solely on providing a graphical user interface to the end user while the processing is done in the cloud. We focus on adaptive display virtualization, where display updates are computed in advance using synchronization techniques and the job is classified as computationally intensive or not based on the complexity of the program and the interaction pattern. Based on the application, the next possible key press is identified and the corresponding frames are pre-fetched into the local buffer. Based on these two factors, a decision is then made whether to execute the job locally or in the cloud, and whether to take the next frame from the local buffer or pull it from the server. Jobs requiring greater interaction are executed locally on the mobile device to reduce interaction delay. If a job is to be executed in the cloud, then only the results of the processing are sent via the network to the device. The parameters are varied at runtime based on network conditions and application parameters to minimise the interaction delay.
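
    A hedged sketch of the two runtime decisions described above, with invented thresholds standing in for the paper's classifier and pre-fetch policy:

        # Illustrative decision functions only; thresholds and names are assumptions.
        def choose_execution(job_complexity, interaction_rate,
                             complexity_threshold=0.7, interaction_threshold=0.5):
            """Run heavy, low-interaction jobs in the cloud; keep the rest local."""
            if job_complexity > complexity_threshold and interaction_rate < interaction_threshold:
                return "cloud"
            return "local"

        def choose_frame_source(predicted_key, prefetch_buffer):
            """Serve a pre-fetched frame if the predicted key press is already buffered."""
            return "buffer" if predicted_key in prefetch_buffer else "server"

        print(choose_execution(job_complexity=0.9, interaction_rate=0.2))       # -> cloud
        print(choose_frame_source("ArrowDown", {"ArrowDown": b"...frame..."}))  # -> buffer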

  15. Empirical study of sensor observation services server instances

    CERN Document Server

    Tamayo, Alain; Granell, Carlos; Huerta, Joaquín; 10.1007/978-3-642-19789-5_10

    2011-01-01

    The number of Sensor Observation Service (SOS) instances available online has been increasing in the last few years. The SOS specification standardises interfaces and data formats for exchanging sensor-related information between information providers and consumers. SOS, in conjunction with other specifications in the Sensor Web Enablement initiative, attempts to realise the Sensor Web vision, a worldwide system where sensor networks of any kind are interconnected. In this paper we present an empirical study of actual instances of servers implementing SOS. The study focuses mostly on which parts of the specification are more frequently included in real implementations, and how exchanged messages follow the structure defined by XML Schema files. Our findings can be of practical use when implementing servers and clients based on the SOS specification, as they can be optimized for common scenarios.

  16. Control of a heterogeneous two-server exponential queueing system

    Science.gov (United States)

    Larsen, R. L.; Agrawala, A. K.

    1983-01-01

    A dynamic control policy known as 'threshold queueing' is defined for scheduling customers from a Poisson source on a set of two exponential servers with dissimilar service rates. The slower server is invoked in response to instantaneous system loading as measured by the length of the queue of waiting customers. In a threshold queueing policy, a specific queue length is identified as a 'threshold,' beyond which the slower server is invoked. The slower server remains busy until it completes service on a customer and the queue length is less than its invocation threshold. Markov chain analysis is employed to analyze the performance of the threshold queueing policy and to develop optimality criteria. It is shown that probabilistic control is suboptimal to minimize the mean number of customers in the system. An approximation to the optimum policy is analyzed which is computationally simple and suffices for most operational applications.
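
    A minimal sketch of the threshold policy in code form, under the stated rule that the slow server is invoked once the queue reaches the threshold and is released only after finishing a customer with the queue back below it:

        # Minimal sketch of the threshold policy: invoke the slow server once the
        # waiting queue reaches the threshold; release it only after it completes a
        # customer and the queue is back below the threshold.
        def slow_server_should_run(queue_length, threshold, slow_server_busy):
            if slow_server_busy:
                return True                       # finish the customer in progress
            return queue_length >= threshold      # (re)invoke past the threshold

        for q in range(5):                        # example with a threshold of 3
            print(q, slow_server_should_run(q, threshold=3, slow_server_busy=False))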

  17. SHADE3 server

    DEFF Research Database (Denmark)

    Madsen, Anders Østergaard; Hoser, Anna Agnieszka

    2014-01-01

    A major update of the SHADE server (http://shade.ki.ku.dk) is presented. In addition to all of the previous options for estimating H-atom anisotropic displacement parameters (ADPs) that were offered by SHADE2, the newest version offers two new methods. The first method combines the original...... translation-libration-screw analysis with input from periodic ab initio calculations. The second method allows the user to input experimental information from spectroscopic measurements or from neutron diffraction experiments on related structures and utilize this information to evaluate ADPs of H atoms...

  18. SQL Server Integration Services

    CERN Document Server

    Hamilton, Bill

    2007-01-01

    SQL Server 2005 Integration Services (SSIS) lets you build high-performance data integration solutions. SSIS solutions wrap sophisticated workflows around tasks that extract, transform, and load (ETL) data from and to a wide variety of data sources. This Short Cut begins with an overview of key SSIS concepts, capabilities, standard workflow and ETL elements, the development environment, execution, deployment, and migration from Data Transformation Services (DTS). Next, you'll see how to apply the concepts you've learned through hands-on examples of common integration scenarios. Once you've

  19. PSSweb: protein structural statistics web server.

    Science.gov (United States)

    Gaillard, Thomas; Stote, Roland H; Dejaegere, Annick

    2016-07-01

    With the increasing number of protein structures available, there is a need for tools capable of automating the comparison of ensembles of structures, a common requirement in structural biology and bioinformatics. PSSweb is a web server for protein structural statistics. It takes as input an ensemble of PDB files of protein structures, performs a multiple sequence alignment and computes structural statistics for each position of the alignment. Different optional functionalities are proposed: structure superposition, Cartesian coordinate statistics, dihedral angle calculation and statistics, and a cluster analysis based on dihedral angles. An interactive report is generated, containing a summary of the results, tables, figures and 3D visualization of superposed structures. The server is available at http://pssweb.org.

  20. Research on Browser/Server Architecture System Migration and Implementation of Cloud Computing in Telecom

    Institute of Scientific and Technical Information of China (English)

    李书生; 段勇; 石屹嵘; 叶宇航

    2012-01-01

    This article analyzes the migration problems and solutions of telecom BS-architecture (Browser/Server) systems and studies IaaS and PaaS cloud solutions for such systems. Two IaaS schemes and one PaaS scheme are proposed, which can effectively solve the problems of moving the web and application layers of telecom BS-architecture systems to the cloud. The article also gives the evolution path for migration and cloud adoption, together with recommendations for a phased rollout, which is of value for promoting the application of cloud computing in telecom IT support systems.

  1. Group Work with Transgender Clients

    Science.gov (United States)

    Dickey, Lore M.; Loewy, Michael I.

    2010-01-01

    Drawing on the existing literature, the authors' research and clinical experiences, and the first author's personal journey as a member and leader of the transgender community, this article offers a brief history of group work with transgender clients followed by suggestions for group work with transgender clients from a social justice…

  2. Vocational Indecision and Rehabilitation Clients.

    Science.gov (United States)

    Strohmer, Douglas C.; And Others

    1984-01-01

    Assessed the vocational decision-making problems of rehabilitation clients (N=60). Revealed that decision-making problems of clients can be grouped into three areas: employment readiness, self-appraisal, and decision-making readiness. Suggested that vocationally decided and undecided subjects differ significantly in the extent to which they have…

  3. Gestor de citas y clientes

    OpenAIRE

    2015-01-01

    A web application for a physiotherapy practice that manages appointments and clients.

  4. Log-less metadata management on metadata server for parallel file systems.

    Science.gov (United States)

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and the metadata server has already handled, so that the MDS does not need to log metadata changes to nonvolatile storage in order to provide a highly available metadata service, which also improves metadata-processing performance. Because the client file system keeps these backup requests in its memory, the overhead of handling them is much smaller than the overhead the metadata server incurs when it uses logging or journaling to achieve high availability. The experimental results show that the newly proposed mechanism significantly speeds up metadata processing and yields better I/O throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or otherwise becomes non-operational.
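
    The backup-and-replay idea can be illustrated with a toy sketch. The class and method names below (ClientMetadataJournal, ToyMDS, and so on) are hypothetical and greatly simplified; this is not the paper's implementation.

```python
import itertools
from dataclasses import dataclass, field

@dataclass
class MetadataRequest:
    seq: int
    op: str            # e.g. "create", "unlink"
    args: tuple

@dataclass
class ClientMetadataJournal:
    """In-memory backup of metadata requests already handled by the MDS."""
    backlog: dict = field(default_factory=dict)
    _seq: itertools.count = field(default_factory=itertools.count)

    def send(self, mds, op, *args):
        req = MetadataRequest(next(self._seq), op, args)
        mds.handle(req)                 # MDS applies the change in memory only
        self.backlog[req.seq] = req     # client keeps the request for replay
        return req.seq

    def trim(self, upto_seq):
        """Drop requests the MDS has durably checkpointed."""
        for seq in [s for s in self.backlog if s <= upto_seq]:
            del self.backlog[seq]

    def replay(self, mds):
        """Re-send cached requests after an MDS crash, in order."""
        for seq in sorted(self.backlog):
            mds.handle(self.backlog[seq])

class ToyMDS:
    """Stand-in metadata server that keeps its namespace in memory."""
    def __init__(self):
        self.namespace = {}

    def handle(self, req):
        if req.op == "create":
            (path,) = req.args
            self.namespace[path] = {}
        elif req.op == "unlink":
            (path,) = req.args
            self.namespace.pop(path, None)

if __name__ == "__main__":
    mds, journal = ToyMDS(), ClientMetadataJournal()
    journal.send(mds, "create", "/a")
    journal.send(mds, "create", "/b")
    fresh_mds = ToyMDS()                # simulate a crash and restart
    journal.replay(fresh_mds)
    print(sorted(fresh_mds.namespace))  # ['/a', '/b']
```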

  5. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    Directory of Open Access Journals (Sweden)

    Jianwei Liao

    2014-01-01

    Full Text Available This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and the metadata server has already handled, so that the MDS does not need to log metadata changes to nonvolatile storage in order to provide a highly available metadata service, which also improves metadata-processing performance. Because the client file system keeps these backup requests in its memory, the overhead of handling them is much smaller than the overhead the metadata server incurs when it uses logging or journaling to achieve high availability. The experimental results show that the newly proposed mechanism significantly speeds up metadata processing and yields better I/O throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or otherwise becomes non-operational.

  6. High-Performance Tiled WMS and KML Web Server

    Science.gov (United States)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
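
    To give a feel for the grid-aligned requests such a server answers, here is a small sketch that builds a WMS 1.1.1 GetMap URL whose bounding box snaps to a fixed tile grid. The endpoint, layer name, and grid definition are invented for illustration.

```python
from urllib.parse import urlencode

def tiled_wms_url(endpoint, layer, level, row, col,
                  tile_size=512, top_left=(-180.0, 90.0), level0_deg=360.0):
    """Build a GetMap URL whose bounding box lies exactly on a fixed tile grid.

    Each zoom level halves the tile span, starting from one tile spanning
    360 degrees at level 0 (an illustrative grid, not the module's actual
    configuration).
    """
    span = level0_deg / (2 ** level)            # tile width/height in degrees
    minx = top_left[0] + col * span
    maxy = top_left[1] - row * span
    bbox = (minx, maxy - span, minx + span, maxy)
    params = {
        "SERVICE": "WMS", "REQUEST": "GetMap", "VERSION": "1.1.1",
        "LAYERS": layer, "SRS": "EPSG:4326",
        "BBOX": ",".join(f"{v:.6f}" for v in bbox),
        "WIDTH": tile_size, "HEIGHT": tile_size,
        "FORMAT": "image/jpeg",
    }
    return f"{endpoint}?{urlencode(params)}"

if __name__ == "__main__":
    print(tiled_wms_url("https://example.org/wms", "global_mosaic",
                        level=3, row=2, col=5))
```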

  7. Savannah River Site computing architecture

    Energy Technology Data Exchange (ETDEWEB)

    1991-03-29

    A computing architecture is a framework for making decisions about the implementation of computer technology and the supporting infrastructure. Because of the size, diversity, and amount of resources dedicated to computing at the Savannah River Site (SRS), there must be an overall strategic plan that can be followed by the thousands of site personnel who make decisions daily that directly affect the SRS computing environment and impact the site's production and business systems. This plan must address the following requirements: There must be SRS-wide standards for procurement or development of computing systems (hardware and software). The site computing organizations must develop systems that end users find easy to use. Systems must be put in place to support the primary function of site information workers. The developers of computer systems must be given tools that automate and speed up the development of information systems and applications based on computer technology. This document describes a proposal for a site-wide computing architecture that addresses the above requirements. In summary, this architecture is standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship.

  8. Savannah River Site computing architecture

    Energy Technology Data Exchange (ETDEWEB)

    1991-03-29

    A computing architecture is a framework for making decisions about the implementation of computer technology and the supporting infrastructure. Because of the size, diversity, and amount of resources dedicated to computing at the Savannah River Site (SRS), there must be an overall strategic plan that can be followed by the thousands of site personnel who make decisions daily that directly affect the SRS computing environment and impact the site's production and business systems. This plan must address the following requirements: There must be SRS-wide standards for procurement or development of computing systems (hardware and software). The site computing organizations must develop systems that end users find easy to use. Systems must be put in place to support the primary function of site information workers. The developers of computer systems must be given tools that automate and speed up the development of information systems and applications based on computer technology. This document describes a proposal for a site-wide computing architecture that addresses the above requirements. In summary, this architecture is standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship.

  9. Optimization environments and the NEOS server

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W.; More, J.J. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-03-01

    The authors are interested in the development of problem-solving environments that simplify the formulation of optimization problems, and the access to computational resources. Once the problem has been formulated, the first step in solving an optimization problem in a typical computational environment is to identify and obtain the appropriate piece of optimization software. Once the software has been installed and tested in the local environment, the user must read the documentation and write code to define the optimization problem in the manner required by the software. Typically, Fortran or C code must be written to define the problem, compute function values and derivatives, and specify sparsity patterns. Finally, the user must debug, compile, link, and execute the code. The Network-Enabled Optimization System (NEOS) is an Internet-based service for optimization providing information, software, and problem-solving services for optimization. The main components of NEOS are the NEOS Guide and the NEOS Server. The current version of the NEOS Server is described in Section 2. The authors emphasize nonlinear optimization problems, but NEOS does handle linear and nonlinearly constrained optimization problems, and solvers for optimization problems subject to integer variables are being added. In Section 4 the authors begin to explore possible extensions to the NEOS Server by discussing the addition of solvers for global optimization problems. Section 5 discusses how a remote procedure call (RPC) interface to NEOS addresses some of the limitations of NEOS in the areas of security and usability. The detailed implementation of such an interface raises a number of questions, such as exactly how the RPC is implemented, what security or authentication approaches are used, and what techniques are used to improve the efficiency of the communication. They outline some of the issues in network computing that arise from the emerging style of computing used by NEOS.

  10. TBI server: a web server for predicting ion effects in RNA folding.

    Directory of Open Access Journals (Sweden)

    Yuhong Zhu

    Full Text Available Metal ions play a critical role in the stabilization of RNA structures. Therefore, accurate prediction of the ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation. These ions can possibly become strongly correlated in the close vicinity of the RNA surface. Most of the currently available software packages, which have widespread success in predicting ion effects in biomolecular systems, however, do not explicitly account for the ion correlation effect. Therefore, it is important to develop a software package/web server for the prediction of ion electrostatics in RNA folding by including ion correlation effects. The TBI web server (http://rna.physics.missouri.edu/tbi_index.html) provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and the most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for the ion correlation and fluctuation effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties for given RNA structures. The results can provide important data for in-depth analysis of ion effects in RNA folding including the ion-dependence of folding stability, ion uptake in the folding process, and the interplay between the different energetic components.

  11. A Popularity-Based Server-Proxy Caching Strategy for Streaming Media

    Institute of Scientific and Technical Information of China (English)

    谭劲; 余胜生; 周敬利

    2003-01-01

    It is expected that by 2003, continuous media will account for more than 50% of the data available on origin servers. This will provoke a significant change in Internet workload; due to the high bandwidth requirements and the long-lived nature of digital video, streaming server loads and network bandwidths are proving to be major limiting factors. Aiming at the characteristics of the broadband network in a residential area, we propose a popularity-based server-proxy caching strategy for streaming media. According to the popularity of a streaming media object on the streaming server and the proxy, this strategy caches the object's content partially or completely, and plays an important role in decreasing server load, reducing the traffic from the streaming server to the proxy, and improving the client's startup latency.
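
    A toy rendition of the idea, not the authors' algorithm, might let the cached prefix of each stream grow with its observed popularity; the class below is such a sketch, with invented names and thresholds.

```python
from collections import defaultdict

class PopularityPrefixCache:
    """Cache a prefix of each stream proportional to its request popularity."""

    def __init__(self, full_cache_hits=50, max_prefix_segments=100):
        self.hits = defaultdict(int)     # request counts per stream
        self.cached_segments = {}        # stream id -> number of segments cached
        self.full_cache_hits = full_cache_hits
        self.max_prefix_segments = max_prefix_segments

    def on_request(self, stream_id):
        self.hits[stream_id] += 1
        popularity = min(self.hits[stream_id] / self.full_cache_hits, 1.0)
        # Popular streams get a longer cached prefix; very popular streams
        # end up cached completely, which keeps their traffic off the server.
        self.cached_segments[stream_id] = int(popularity * self.max_prefix_segments)
        return self.cached_segments[stream_id]

if __name__ == "__main__":
    cache = PopularityPrefixCache()
    for _ in range(10):
        cache.on_request("movie-42")
    print(cache.cached_segments["movie-42"])   # 20 of 100 segments cached
```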

  12. Visualization in a Climate Computing Centre

    Science.gov (United States)

    Meier-Fleischer, Karin; Röber, Niklas; Böttinger, Michael

    2014-05-01

    Today, the extensive numerical simulations of climate models require elaborate visualization for understanding and communicating the results. Typical data sets of climate models are 3-dimensional, multivariate and time dependent, and can hence be very large. Interactive visual data analysis improves and accelerates the comprehension of these vast amounts of data. At DKRZ, the German Climate Computing Centre, a central high-end visualization server, various domain-specific visualization applications, and a remote 3D rendering solution enable users to interactively visualize their extensive model results right at their desktops. DKRZ's visualization server is a heterogeneous Linux cluster, currently consisting of 10 state-of-the-art visualization nodes equipped with 96-256 GB RAM and high-end NVIDIA GPUs. Since the parallel file system of DKRZ's supercomputer is directly mounted over a powerful network, the model data can directly be analyzed and visualized. VirtualGL and TurboVNC are used for utilizing the server's GPUs for 3D rendering, while the TurboVNC client on the user's local computer continuously displays the resulting video stream. By using this central visualization server instead of a local computer, three main benefits are achieved: Time-consuming transfers of large data sets from the supercomputer to the local computer are not needed. The hardware of the user's local workstation doesn't need to be powerful; no expensive GPU is required. Users don't have to install or buy visualization software. On the visualization server, a wide range of visualization software is installed. Avizo Green, a powerful commercial package customized for interactive 3D visualization of climate model data, is available, as well as SimVis and ParaView, which focus more on an exploratory visualization of data. SimVis and ParaView provide techniques like Linking & Brushing to emphasize or de-emphasize portions of the data. Furthermore, some domain-specific 2D graphics

  13. Design of Control Server Application Software for Neutral Beam Injection System

    Institute of Scientific and Technical Information of China (English)

    施齐林; 胡纯栋; 盛鹏; 宋士化

    2012-01-01

    For the remote control of a neutral beam injection (NBI) system, a software package named NBIcsw was developed to run on the control server. It meets the requirements for data transmission and operation control between the NBI measurement and control layer (MCL) and the remote monitoring layer (RML). NBIcsw runs on a Linux system and was developed in client/server (C/S) mode with multithreading technology. Operational use has shown that the software performs efficiently.

  14. Host Integration Server 2004

    Institute of Scientific and Technical Information of China (English)

    Paul Thurrott; 杨岩

    2005-01-01

    Host Integration Server (HIS) 2004, released by Microsoft, is a major update to its IBM mainframe integration server, adding a number of important new features and improvements. Unlike most of Microsoft's interoperability products, HIS 2004 is designed for migration rather than pure integration; in practice it helps customers get more value out of their existing legacy platforms, which in this case means IBM mainframes and the iSeries (formerly AS/400) family of machines.

  15. Universal Fingerprinting Chip Server

    Science.gov (United States)

    Casique-Almazán, Janet; Larios-Serrato, Violeta; Olguín-Ruíz, Gabriela Edith; Sánchez-Vallejo, Carlos Javier; Maldonado-Rodríguez, Rogelio; Méndez-Tenorio, Alfonso

    2012-01-01

    The Virtual Hybridization approach predicts the most probable hybridization sites across a target nucleic acid of known sequence, including both perfect and mismatched pairings. Potential hybridization sites, having a user-defined minimum number of bases that are paired with the oligonucleotide probe, are first identified. Free energy values are then evaluated for each potential hybridization site; if a site's calculated free energy is equally or more negative than a user-defined free-energy cut-off value, it is considered a site with a high probability of hybridization. The Universal Fingerprinting Chip Applications Server contains the software for visualizing predicted hybridization patterns, which yields a simulated hybridization fingerprint that can be compared with experimentally derived fingerprints or with a virtual fingerprint arising from a different sample. Availability: http://bioinformatica.homelinux.org/UFCVH/ PMID: 22829736

  16. From honeybees to Internet servers: biomimicry for distributed management of Internet hosting centers.

    Science.gov (United States)

    Nakrani, Sunil; Tovey, Craig

    2007-12-01

    An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.
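
    As a loose sketch of the recruitment principle, not the authors' algorithm, the snippet below lets a fraction of servers probabilistically re-commit to services in proportion to recent revenue per service, much as foragers are recruited to more profitable flower patches; all names and numbers are illustrative.

```python
import random

def reallocate(servers, revenue_per_service, switch_prob=0.2, rng=random):
    """Probabilistically move servers toward services with higher revenue rates.

    servers: dict mapping server id -> currently hosted service
    revenue_per_service: dict mapping service -> recent revenue per allocated server
    """
    services = list(revenue_per_service)
    total = sum(revenue_per_service.values()) or 1.0
    weights = [revenue_per_service[s] / total for s in services]

    new_alloc = dict(servers)
    for server in servers:
        # Only a fraction of servers "follow a dance" each round, which damps
        # oscillation and keeps reallocation (switching) cost bounded.
        if rng.random() < switch_prob:
            new_alloc[server] = rng.choices(services, weights=weights, k=1)[0]
    return new_alloc

if __name__ == "__main__":
    random.seed(7)
    allocation = {f"srv{i}": "service-A" for i in range(10)}
    revenue = {"service-A": 1.0, "service-B": 4.0}
    for _ in range(20):
        allocation = reallocate(allocation, revenue)
    hosted_b = sum(1 for s in allocation.values() if s == "service-B")
    print(f"{hosted_b} of 10 servers now host service-B")
```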

  17. Tag Based Client Side Detection of Content Sniffing Attacks with File Encryption and File Splitter Technique

    Directory of Open Access Journals (Sweden)

    Syed Imran Ahmed Qadri,

    2012-09-01

    Full Text Available In this paper we provide a security framework for the server and client side: prevention methods are applied on the server side, and alert replication is performed on the client side. Content sniffing attacks occur if browsers render non-HTML files embedded with malicious HTML content or JavaScript code as HTML files. The framework mitigates effects such as the theft of sensitive information through the execution of malicious JavaScript code. In this framework the client accesses data that is encrypted on the server side. On the server, the data is encrypted using private-key cryptography and the file is sent after being split, which reduces execution time. We also add a tag-bit concept as a means of checking for alteration; if an alteration is performed, the tag bit changes. The tag bit is generated by a message digest algorithm. We have implemented our approach in a Java-based environment that can be integrated into web applications written in various languages.
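
    The tag/digest idea can be sketched roughly as follows; this is an illustration only, with the encryption step omitted and all function names invented, not the authors' implementation.

```python
import hashlib

def split_with_tags(payload: bytes, chunk_size: int = 1024):
    """Split a payload into chunks, each paired with a SHA-256 digest tag."""
    chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
    return [(chunk, hashlib.sha256(chunk).hexdigest()) for chunk in chunks]

def verify(tagged_chunks):
    """Return True only if no chunk has been altered since tagging."""
    return all(hashlib.sha256(chunk).hexdigest() == tag for chunk, tag in tagged_chunks)

if __name__ == "__main__":
    tagged = split_with_tags(b"response body sent from the server" * 100)
    print(verify(tagged))                       # True
    chunk, tag = tagged[0]
    tagged[0] = (b"<script>alert(1)</script>" + chunk, tag)   # simulated tampering
    print(verify(tagged))                       # False
```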

  18. Selection of Server-Side Technologies for an E-Business Curriculum

    Science.gov (United States)

    Sandvig, J. Christopher

    2007-01-01

    The rapid growth of e-business and e-commerce has made server-side programming an increasingly important topic in information systems (IS) and computer science (CS) curricula. This article presents an overview of the major features of several popular server-side programming technologies and discusses the factors that influence the selection of…

  19. SQL Server 2014 development essentials

    CERN Document Server

    Masood-Al-Farooq, Basit A

    2014-01-01

    This book is an easy-to-follow, comprehensive guide that is full of hands-on examples, which you can follow to successfully design, build, and deploy mission-critical database applications with SQL Server 2014. If you are a database developer, architect, or administrator who wants to learn how to design, implement, and deliver a successful database solution with SQL Server 2014, then this book is for you. Existing users of Microsoft SQL Server will also benefit from this book as they will learn what's new in the latest version.

  20. Secure Access to Private Services in Intranet for Mobile Clients

    Directory of Open Access Journals (Sweden)

    Li Kuang

    2013-02-01

    Full Text Available With the wide adoption of service computing and mobile computing, people tend to invoke services with mobile devices, requiring accurate and real-time feedback from services at any time and any place. Some of these services are private to limited users and require identity authorization before use; hence secure access control in wireless networks should be provided. To address this challenge, in this study we propose the architecture and protocols of a system for access to private services by mobile clients, which combines trusted computing, the Diffie-Hellman key agreement protocol, digital certificates, the DES data encryption algorithm and double verification. We further show the implementation of the proposed system, in which we have realized the authentication and authorization of mobile clients and secure data transfer between mobile clients on the unsafe Internet and private services in the Intranet.
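
    To make the key-agreement step concrete, here is a bare textbook Diffie-Hellman exchange followed by hashing the agreed secret into a symmetric key. The parameters are deliberately tiny toy values, and the certificate checks, DES encryption and double verification described in the record are omitted.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters (textbook example; far too small for real use).
P, G = 23, 5

def dh_keypair():
    private = secrets.randbelow(P - 2) + 1       # x in [1, P-2]
    public = pow(G, private, P)
    return private, public

def shared_key(own_private, peer_public):
    secret = pow(peer_public, own_private, P)    # g^(xy) mod p on both sides
    # Derive a fixed-length symmetric key from the agreed secret.
    return hashlib.sha256(str(secret).encode()).digest()

if __name__ == "__main__":
    client_priv, client_pub = dh_keypair()
    server_priv, server_pub = dh_keypair()
    assert shared_key(client_priv, server_pub) == shared_key(server_priv, client_pub)
    print("agreed key:", shared_key(client_priv, server_pub).hex()[:16], "...")
```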

  1. A Capacity Supply Model for Virtualized Servers

    Directory of Open Access Journals (Sweden)

    Alexander PINNOW

    2009-01-01

    Full Text Available This paper deals with determining the capacity supply for virtualized servers. First, a server is modeled as a queue based on a Markov chain. Then, the effect of server virtualization on the capacity supply will be analyzed with the distribution function of the server load.

  2. Windows Server 2012 R2 administrator cookbook

    CERN Document Server

    Krause, Jordan

    2015-01-01

    This book is intended for system administrators and IT professionals with experience in Windows Server 2008 or Windows Server 2012 environments who are looking to acquire the skills and knowledge necessary to manage and maintain the core infrastructure required for a Windows Server 2012 and Windows Server 2012 R2 environment.

  3. WPS-based technology for client-side remote sensing data processing

    Science.gov (United States)

    Kazakov, E.; Terekhov, A.; Kapralov, E.; Panidi, E.

    2015-04-01

    Server-side processing is the principal mode for most current Web-based geospatial data processing tools. However, in some cases client-side geoprocessing may be more convenient and acceptable. This study is dedicated to the development of a methodology and techniques for elaborating Web services that also allow client-side geoprocessing. The practical objectives of the research are focused on remote sensing data processing, remote sensing data being one of the most resource-intensive data types. The idea underlying the study is to propose a geoprocessing Web service schema that is compatible with the current server-oriented Open Geospatial Consortium standard (the OGC WPS standard) and that additionally allows the processing to run on the client, transmitting the processing tool (executable code) over the network instead of the data. At the same time, the unity of the executable code must be preserved: the transmitted code should be the same as that used for server-side processing. This unity guarantees that the processing results are identical regardless of which schema is used. The authors refer to such services as Hybrid Geoprocessing Web Services (HGWSs). Common approaches to the architecture and structure of HGWSs are proposed at the current stage, as well as a number of service prototypes. For testing the selected approaches, a geoportal prototype was implemented which provides access to the created HGWSs. Further work addresses the formalization of platform-independent HGWS implementation techniques and approaches to conceptualizing their safe use and chaining possibilities. The proposed HGWS schema could become one of the possible solutions for distributed systems, assuming that processing servers could play the role of clients connecting to the service supply server. The study was partially supported by Russian Foundation for Basic Research (RFBR), research project No. 13

  4. Mac OS X Lion Server For Dummies

    CERN Document Server

    Rizzo, John

    2011-01-01

    The perfect guide to help administrators set up Apple's Mac OS X Lion Server. With the overwhelming popularity of the iPhone and iPad, more Macs are appearing in corporate settings. The newest version of Mac Server is the ideal way to administer a Mac network. This friendly guide explains to both Windows and Mac administrators how to set up and configure the server, including services such as iCal Server, Podcast Producer, Wiki Server, Spotlight Server, iChat Server, File Sharing, Mail Services, and support for iPhone and iPad. It explains how to secure, administer, and troubleshoot the network

  5. THttpServer class in ROOT

    Science.gov (United States)

    Adamczewski-Musch, Joern; Linev, Sergey

    2015-12-01

    The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT applications. It is based on the embeddable Civetweb HTTP server and provides direct access to all objects registered with the server. Object data can be provided in different formats: binary, XML, GIF/PNG, and JSON. A generic user interface for THttpServer has been implemented with HTML/JavaScript based on the JavaScript ROOT development. With any modern web browser one can list, display, and monitor objects available on the server. THttpServer is used in the Go4 framework to provide an HTTP interface to the online analysis.

  6. The SDSS data archive server

    Energy Technology Data Exchange (ETDEWEB)

    Neilsen, Eric H., Jr.; /Fermilab

    2007-10-01

    data reduction pipeline is similar. Each pipeline deposits the results in a collection of files on disk. The Catalog Archive Server (CAS) provides an interface to a database of objects detected through the SDSS along with their properties and observational metadata. This serves the needs of most users, but some users require access to files produced by the pipelines. Some data, including the corrected frames (the pixel data itself corrected for instrumental signatures), the models for the point spread function, and an assortment of quality assurance plots, are not included in the database at all. Sometimes it is simply more convenient for a user to read data from existing files than to retrieve it using database queries. This is often the case, for example, when a user wants to download data for a significant fraction of objects in the database. Users might need to perform analysis that requires more computing power than the CAS database servers can reasonably provide, and so need to download the data so that it can be analyzed with local resources. Users can derive observational parameters not measured by the standard SDSS pipeline from the corrected frames, metadata, and other data products, or simply use the output of tools with which they're familiar. The challenge in distributing these data lies not in the distribution method itself, but in providing tools and support that allow users to find the data they need and interpret it properly. After introducing the data itself, this article describes how the DAS uses ubiquitous and well-understood technologies to manage and distribute the data. It then discusses how it addresses the more difficult problem of helping the public find and use the data it contains, despite the complexity of its content and organization.

  7. Computer simulation of spacecraft/environment interaction.

    Science.gov (United States)

    Krupnikov, K K; Makletsov, A A; Mileev, V N; Novikov, L S; Sinolits, V V

    1999-10-01

    This report presents some examples of a computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for estimating spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for the calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the virtual reality markup language.

  8. Roadmap to the SRS computing architecture

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A.

    1994-07-05

    This document outlines the major steps that must be taken by the Savannah River Site (SRS) to migrate the SRS information technology (IT) environment to the new architecture described in the Savannah River Site Computing Architecture. This document proposes an IT environment that is "...standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship." Achieving this vision will require many substantial changes in the computing applications, systems, and supporting infrastructure at the site. This document consists of a set of roadmaps which provide explanations of the necessary changes for IT at the site and describes the milestones that must be completed to finish the migration.

  9. Computer simulation of spacecraft/environment interaction

    CERN Document Server

    Krupnikov, K K; Mileev, V N; Novikov, L S; Sinolits, V V

    1999-01-01

    This report presents some examples of a computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for estimating spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for the calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the virtual reality markup language.

  10. QlikView Server and Publisher

    CERN Document Server

    Redmond, Stephen

    2014-01-01

    This is a comprehensive guide with a step-by-step approach that enables you to host and manage servers using QlikView Server and QlikView Publisher. If you are a server administrator wanting to learn how to deploy QlikView Server for server management, analysis, and testing, and QlikView Publisher for publishing business content, then this is the perfect book for you. No prior experience with QlikView is expected.

  11. Cloud Computing at the Tactical Edge

    Science.gov (United States)

    2012-10-01

    [Recovered from a figure: cloudlet discovery components include an Avahi ZeroConf broadcast of the cloudlet server IP address/port, an overlay HTTP service, the base VM image, OpenCV 2.2, OpenSSL, lzma, xdelta3 and third-party runtime components.] 4.6 Face Recognition Server. The Face Recognition Server is a program written in C++ that uses the OpenCV image recognition library to process images sent from the Face Recognition Client for training or recognition purposes [OpenCV 2012]. When in recognition mode, it

  12. Mastering Citrix XenServer

    CERN Document Server

    Reed, Martez

    2014-01-01

    If you are an administrator who is looking to gain a greater understanding of how to design and implement a virtualization solution based on Citrix® XenServer®, then this book is for you. The book will serve as an excellent resource for those who are already familiar with other virtualization platforms, such as Microsoft Hyper-V or VMware vSphere.The book assumes that you have a good working knowledge of servers, networking, and storage technologies.

  13. Building server capabilities in China

    DEFF Research Database (Denmark)

    Adeyemi, Oluseyi; Slepniov, Dmitrij; Wæhrens, Brian Vejrum;

    2012-01-01

    The purpose of this paper is to further our understanding of how multinational companies build server capabilities in China. The paper is based on the cases of two western companies with operations in China. The findings highlight a number of common patterns in (1) the managerial challenges related to the development of server capabilities at offshore sites, and (2) the means by which these challenges can be handled.

  14. PERFORMANCE OF MULTI SERVER AUTHENTICATION AND KEY AGREEMENT WITH USER PROTECTION IN NETWORK SECURITY

    Directory of Open Access Journals (Sweden)

    NAGAMALLESWARA RAO.DASARI,

    2010-08-01

    Full Text Available Using smart cards, remote user authentication and key agreement can be made simple, flexible, and efficient, helping to create a secure distributed computing environment. In addition to user authentication and key distribution, smart cards are very useful for providing identity privacy for users. In this paper, we propose novel multi-server authentication and key agreement schemes with user protection in network security. We first propose a single-server scheme and then apply this scheme to a multi-server environment. The main merits include: (1) the privacy of users can be ensured; (2) a user can freely choose his own password; (3) the computation and communication cost is very low; (4) servers and users can authenticate each other; (5) it generates a session key agreed upon by the server and the user; (6) our proposed schemes are nonce-based schemes and therefore do not have a serious time-synchronization problem.
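
    A bare-bones nonce-based challenge-response of the kind alluded to in merit (6) might look like the sketch below; it avoids timestamps entirely but is illustrative only, omitting the smart card, user-privacy and session-key parts of the schemes.

```python
import hashlib
import hmac
import secrets

def derive_key(password: str, salt: bytes) -> bytes:
    # Password-derived key shared by user and server (registration phase).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def server_challenge() -> bytes:
    return secrets.token_bytes(16)               # fresh nonce per login attempt

def client_response(key: bytes, nonce: bytes) -> bytes:
    return hmac.new(key, nonce, hashlib.sha256).digest()

def server_verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    salt = secrets.token_bytes(16)
    key = derive_key("correct horse battery staple", salt)
    nonce = server_challenge()
    print(server_verify(key, nonce, client_response(key, nonce)))   # True
    print(server_verify(key, server_challenge(),                    # replayed response fails
                        client_response(key, nonce)))
```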

  15. Software Aging Analysis of Web Server Using Neural Networks

    Directory of Open Access Journals (Sweden)

    G.Sumathi

    2012-05-01

    Full Text Available Software aging is a phenomenon that refers to progressive performance degradation or transient failures or even crashes in long-running software systems such as web servers. It mainly occurs due to the deterioration of operating system resources, fragmentation and numerical error accumulation. A primitive method to fight against software aging is software rejuvenation. Software rejuvenation is a proactive fault management technique aimed at cleaning up the system's internal state to prevent the occurrence of more severe crash failures in the future. It involves occasionally stopping the running software, cleaning its internal state and restarting it. An optimized schedule for performing the software rejuvenation has to be derived in advance, because a long-running application cannot be brought down arbitrarily, as that may lead to wasted cost. This paper proposes a method to derive an accurate and optimized schedule for rejuvenation of a web server (Apache) by using a Radial Basis Function (RBF) based feed-forward neural network, a variant of Artificial Neural Networks (ANNs). Aging indicators are obtained through an experimental setup involving the Apache web server and clients, and act as inputs to the neural network model. This method is better than existing ones because the use of RBFs leads to better accuracy and faster convergence.
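
    As a loose illustration of the forecasting step, not the paper's model or data, the sketch below fits a Gaussian RBF regressor to a lag window of a synthetic aging indicator and schedules rejuvenation when the one-step forecast crosses a threshold; it assumes NumPy is available and every constant is made up.

```python
import numpy as np

def rbf_features(X, centers, width):
    # Gaussian radial basis features for each row of X.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
t = np.arange(400)
free_mb = 4000 - 6.0 * t + 80 * np.sin(t / 15) + rng.normal(0, 15, t.size)  # synthetic aging indicator

LAGS, SPLIT, THRESHOLD_MB = 8, 300, 2000.0

# Supervised pairs: a window of recent measurements -> the next measurement.
X = np.stack([free_mb[i:i + LAGS] for i in range(len(free_mb) - LAGS)])
y = free_mb[LAGS:]
X_train, y_train = X[:SPLIT], y[:SPLIT]

centers = X_train[rng.choice(len(X_train), size=25, replace=False)]
width = 2.0 * float(np.std(X_train))
weights, *_ = np.linalg.lstsq(rbf_features(X_train, centers, width), y_train, rcond=None)

# Online monitoring: forecast the next sample and schedule rejuvenation
# as soon as the forecast crosses the exhaustion threshold.
for i in range(SPLIT, len(X)):
    forecast = (rbf_features(X[i:i + 1], centers, width) @ weights)[0]
    if forecast < THRESHOLD_MB:
        print(f"schedule rejuvenation at monitoring step {i} (forecast {forecast:.0f} MB free)")
        break
else:
    print("no rejuvenation needed within the monitored horizon")
```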

  16. Podemos fidelizar clientes inicialmente insatisfechos

    Directory of Open Access Journals (Sweden)

    Jesús Cambra-Fierro

    2011-01-01

    Full Text Available The relational paradigm, dominant in the field of marketing, advocates establishing and developing long-lasting relationships with customers. This requires knowing what their needs are and striving to satisfy them. Customers want to feel important, so companies should be concerned not only with selling but also with knowing their customers' real level of satisfaction or dissatisfaction. From a logical point of view this should therefore be the pattern of corporate behaviour, as the works of Barroso (2008) and Coca (2008) indicate. Reality shows, however, that this is not always the case: even though customers always want to feel looked after, there are companies that seem to forget this basic premise and nevertheless obtain positive results. The aim of this paper is to analyse the possible contribution of service recovery processes to customer/user loyalty. To do so, we take the concept of service recovery processes as a reference and study the context of the mobile telephony sector in Spain. Using descriptive statistics and the Partial Least Squares (PLS) technique, we conclude that companies behave in a way opposite to what customers expect and do not really strive to win back their satisfaction. Nevertheless, users' opinions are very revealing and suggest that it is possible to turn an initially dissatisfied customer into a loyal one.

  17. Performance model of the Argonne Voyager multimedia server

    Energy Technology Data Exchange (ETDEWEB)

    Disz, T.; Olson, R.; Stevens, R. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-07-01

    The Argonne Voyager Multimedia Server is being developed in the Futures Lab of the Mathematics and Computer Science Division at Argonne National Laboratory. As a network-based service for recording and playing multimedia streams, it is important that the Voyager system be capable of sustaining certain minimal levels of performance in order for it to be a viable system. In this article, the authors examine the performance characteristics of the server. As they examine the architecture of the system, they try to determine where bottlenecks lie, show actual vs potential performance, and recommend areas for improvement through custom architectures and system tuning.

  18. An integrated medical image database and retrieval system using a web application server.

    Science.gov (United States)

    Cao, Pengyu; Hashiba, Masao; Akazawa, Kouhei; Yamakawa, Tomoko; Matsuto, Takayuki

    2003-08-01

    We developed an Integrated Medical Image Database and Retrieval System (INIS) for easy access by medical staff. The INIS mainly consists of four parts: dedicated servers to save medical images from multi-vendor modalities (CT, MRI, CR, ECG and endoscopy); an integrated image database (DB) server to save various kinds of images in DICOM format; a Web application server to connect clients to the integrated image DB; and Web browser terminals connected to the HIS system. The INIS provides a common screen design for retrieving CT, MRI, CR, endoscopic and ECG images and radiological reports, which allows doctors to retrieve radiological images and the corresponding reports, or the ECG images of a patient, simultaneously on one screen. Doctors working in internal medicine accessed information on average 492 times a month; doctors working in cardiology and gastroenterology accessed it 308 times a month. Using the INIS, medical staff could browse all or parts of a patient's medical images and reports.

  19. DICOM image integration into an electronic medical record using thin viewing clients

    Science.gov (United States)

    Stewart, Brent K.; Langer, Steven G.; Taira, Ricky K.

    1998-07-01

    Purpose -- To integrate radiological DICOM images into our currently existing web-browsable Electronic Medical Record (MINDscape). Over the last five years the University of Washington has created a clinical data repository combining, in a distributed relational database, information from multiple departmental databases (MIND). A text-based view of these data, called the Mini Medical Record (MMR), has been available for three years. MINDscape, unlike the text-based MMR, provides a platform-independent, web browser view of the MIND dataset that can easily be linked to other information resources on the network. We have now added the integration of radiological images into MINDscape through a DICOM web server. Methods/New Work -- We have integrated a commercial web server that acts as a DICOM Storage Class Provider for our computed radiography (CR), computed tomography (CT), digital fluoroscopy (DF), magnetic resonance (MR) and ultrasound (US) scanning devices. These images can be accessed through CGI queries or by linking the image server database using ODBC or SQL gateways. This allows the use of dynamic HTML links to the images on the DICOM web server from MINDscape, so that the radiology reports already resident in the MIND repository can be married with the associated images through the unique examination accession number generated by our Radiology Information System (RIS). The web browser plug-in used provides a wavelet decompression engine (up to 16 bits per pixel) and performs the following image manipulation functions: window/level, flip, invert, sort, rotate, zoom, cine-loop and save as JPEG. Results -- Radiological DICOM image sets (CR, CT, MR and US) are displayed with the associated exam reports for referring physicians and clinicians anywhere within the widespread academic medical center on PCs, Macs, X-terminals and Unix computers. This system is also being used for home teleradiology applications. Conclusion -- Radiological DICOM images can be made available

  20. Secured Data Consistency and Storage Way in Untrusted Cloud using Server Management Algorithm

    CERN Document Server

    Dinesh, C

    2011-01-01

    Keeping all of the data required by many applications safely stored for the user in the cloud is a challenging task. Storing data in the cloud may not be fully trustworthy, and since the client does not keep a copy of all stored data, he has to depend on the Cloud Service Provider. However, dynamic data operations, Reed-Solomon coding and verification-token construction methods do not tell us the total storage capacity of the allocated server space before and after data is added to the cloud. We therefore introduce a proposed system consisting of an efficient storage-measurement and space-comparison algorithm with time management, which measures the total allocated storage area before and after data insertion in the cloud. Using the proposed scheme, the value or weight of the stored data before and after insertion is measured accurately by the client, within a specified time, in the cloud storage area. We have also proposed a multi-server restore point for server-failure conditions: if a server failure occurs, by using this scheme the data can be reco...

  1. Self-regulating Message Throughput in Enterprise Messaging Servers – A Feedback Control Solution

    Directory of Open Access Journals (Sweden)

    Ravi Kumar G

    2012-01-01

    Full Text Available Enterprise Messaging is a very popular message-exchange concept in asynchronous distributed computing environments. Enterprise Messaging Servers are heavily used in building business-critical enterprise applications such as Internet-based order processing systems, B2B price distribution, and geographically dispersed enterprise applications. It is always desirable that Messaging Servers exhibit high performance to meet the Service Level Agreements (SLAs). There have been investigations into managing the performance of distributed computing systems in different ways, such as IT administrators configuring and tuning Messaging Server parameters, or implementing complex conditional programming to handle workload dynamics. In practice, however, it is extremely difficult to handle such changing workloads and still meet the performance requirements, and it is also challenging to provision for future resource requirements based on future workloads. Although there have been attempts to self-regulate the performance of Enterprise Messaging Servers, little investigation has been done into applying feedback control theory to managing Messaging Server performance. We propose an adaptive-control-based solution not only to manage the performance of the servers to meet SLAs, but also to proactively self-regulate the performance so that Messaging Servers are capable of meeting current and future workloads. We implemented and evaluated our solution and observed that the control-theory-based solution improves the performance of Enterprise Messaging Servers significantly.
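
    The flavour of such a controller can be conveyed with a simple proportional-integral loop that resizes a consumer pool to hold the queue backlog near a setpoint; this is only a sketch of the control idea, not the paper's controller, and all constants are invented.

```python
class BacklogPIController:
    """Proportional-integral controller for a message-consumer pool size."""

    def __init__(self, setpoint=1000, kp=0.01, ki=0.002,
                 min_threads=1, max_threads=64):
        self.setpoint = setpoint              # desired queue backlog (messages)
        self.kp, self.ki = kp, ki
        self.min_threads, self.max_threads = min_threads, max_threads
        self.integral = 0.0

    def update(self, backlog, current_threads):
        error = backlog - self.setpoint
        # Clamp the integral term (anti-windup) so large transients
        # do not dominate the controller for many intervals afterwards.
        self.integral = max(-5000.0, min(5000.0, self.integral + error))
        adjustment = self.kp * error + self.ki * self.integral
        target = round(current_threads + adjustment)
        return int(max(self.min_threads, min(self.max_threads, target)))

if __name__ == "__main__":
    # Crude plant model: each consumer thread drains 50 messages per interval.
    controller = BacklogPIController()
    backlog, threads = 8000, 4
    for interval, arrivals in enumerate([3000] * 10 + [500] * 10):
        threads = controller.update(backlog, threads)
        backlog = max(0, backlog + arrivals - 50 * threads)
        print(f"t={interval:02d} arrivals={arrivals} threads={threads} backlog={backlog}")
```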

  2. Microkernel Architecture: Making Application Servers Open to Change

    Institute of Scientific and Technical Information of China (English)

    CAO Donggang; MEI Hong; WANG Qianxiang; HUANG Gang

    2005-01-01

    Application server software is required to be highly adaptive and reconfigurable so as to satisfy the changing requirements of various component-based applications in enterprise computing environments. To meet this goal, an open-to-change architecture is a must, which challenges almost all distributed system software designers. This paper describes our work on designing an adaptive J2EE (Java 2 Platform, Enterprise Edition) application server named PKUAS. PKUAS has a microkernel-based, service-oriented architecture, which allows different services to be plugged into it and managed conveniently. The PKUAS microkernel has a well-defined structure that strictly separates management concerns from business concerns, which brings excellent modularity and extensibility to PKUAS without causing much performance degradation. Practice shows that this approach can effectively make application servers open to change.

  3. On the Benefit of Virtualization: Strategies for Flexible Server Allocation

    CERN Document Server

    Arora, Dushyant; Schaffrath, Gregor; Schmid, Stefan

    2010-01-01

    Virtualization technology facilitates a dynamic, demand-driven allocation and migration of servers. This paper studies how the flexibility offered by network virtualization can be used to improve Quality-of-Service parameters such as latency, while taking into account allocation costs. A generic use case is considered where both the overall demand issued for a certain service (for example, an SAP application in the cloud, or a gaming application) and the origins of the requests change over time (e.g., due to time zone effects or due to user mobility), and we present online and optimal offline strategies to compute the number and location of the servers implementing this service. These algorithms also allow us to study the fundamental benefits of dynamic resource allocation compared to static systems. Our simulation results confirm our expectations that the gain of flexible server allocation is particularly high in scenarios with moderate dynamics.

  4. Learning SQL Server Reporting Services 2012

    CERN Document Server

    Krishnaswamy, Jayaram

    2013-01-01

    The book is packed with clear instructions and plenty of screenshots, providing all the support and guidance you will need as you begin to generate reports with SQL Server 2012 Reporting Services.This book is for those who are new to SQL Server Reporting Services 2012 and aspiring to create and deploy cutting edge reports. This book is for report developers, report authors, ad-hoc report authors and model developers, and Report Server and SharePoint Server Integrated Report Server administrators. Minimal knowledge of SQL Server is assumed and SharePoint experience would be helpful.

  5. Implementing Citrix XenServer Quickstarter

    CERN Document Server

    Ahmed, Gohar

    2013-01-01

    Implementing Citrix XenServer Quick Starter is a practical, hands-on guide that will help you get started with the Citrix XenServer virtualization technology, with easy-to-follow instructions. Implementing Citrix XenServer Quick Starter is for system administrators who have little to no prior knowledge of virtualization, and specifically of Citrix XenServer virtualization. If you're managing a lot of physical servers and are tired of installing, deploying, updating, and managing physical machines on a daily basis over and over again, then you should probably explore your options with XenServer virtualization.

  6. Beginning Microsoft SQL Server 2012 Programming

    CERN Document Server

    Atkinson, Paul

    2012-01-01

    Get up to speed on the extensive changes to the newest release of Microsoft SQL Server. The 2012 release of Microsoft SQL Server changes how you develop applications for SQL Server. With this comprehensive resource, SQL Server authority Robert Vieira presents the fundamentals of database design and SQL concepts, and then shows you how to apply these concepts using the updated SQL Server. Publishing in time with the 2012 release, Beginning Microsoft SQL Server 2012 Programming begins with a quick overview of database design basics and the SQL query language and then quickly proceeds to show

  7. Health Monitoring and Prognostics for Computer Servers

    Data.gov (United States)

    National Aeronautics and Space Administration — Prognostics solutions for mission-critical systems require a comprehensive methodology for proactively detecting and isolating failures, recommending and...

  8. Functional web applications : implementation and use of client side interpreters

    NARCIS (Netherlands)

    Jansen, J.M.

    2010-01-01

    The Internet has become a prominent platform for the deployment of computer applications. Web browsers are an important interface for e-mail, on-line shopping, and banking applications. Despite this popularity, the development of web applications is a difficult job due to their complex client-server

  9. Distributed Digital Survey Logbook Built on GeoServer and PostGIS

    Science.gov (United States)

    Jovicic, Aleksandar; Castelli, Ana; Kljajic, Zoran

    2013-04-01

    Keeping track of events that happen during a survey (e.g. the position and time when instruments go into the water or come back on board, the depths from which samples are taken, or notes about equipment malfunctions and repairs) is essential for efficient post-processing and quality control of the collected data, especially in the case of suspicious measurements. Most scientists still use the good old paper method for such tasks and later transform the records into digital form using spreadsheet applications. This approach looks safer (if a person is not confident in their computer skills), but in reality it turns out to be more error-prone, especially when it comes to recording positions with their variations of sexagesimal representation, or when there are no hints about which time zone was used for time recording. As cruises usually involve various teams, not all of them interested in doing their own measurements at each station, keeping an eye on the current position is essential, especially if the cruise plan changes (due to bad weather or the discovery of some underwater feature that requires more attention than originally planned). Also, the position is usually displayed on only one monitor (as most GPS receivers provide just serial connectivity, and distributing such a signal to multiple clients requires devices that are not widespread on the computer equipment market), so a messy situation can arise in the control room when everybody tries to write down the current position and time. To overcome all of these obstacles the Distributed Digital Survey Logbook was implemented. It is built on the Open Geospatial Consortium (OGC) compliant GeoServer, using a PostGIS database. It can handle geospatial content (charts and cruise plans) and record the vessel track and all kinds of events that any member of the team wants to record. As GeoServer allows the distribution of position data to an unlimited number of clients (from traditional PCs and laptops to tablets and smartphones), it can decrease the pressure on the control room no matter if all features are used or just as distant
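
    One possible shape for the event table and the insert issued by such a logbook client is sketched below, assuming a PostGIS-enabled database and the psycopg2 driver; the table, column names and connection details are invented.

```python
import datetime

import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS survey_event (
    id          serial PRIMARY KEY,
    recorded_at timestamptz NOT NULL,
    author      text NOT NULL,
    note        text,
    geom        geometry(Point, 4326) NOT NULL
);
"""

def log_event(conn, lon, lat, author, note):
    """Insert one logbook event stamped with the vessel's current GPS position."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO survey_event (recorded_at, author, note, geom) "
            "VALUES (%s, %s, %s, ST_SetSRID(ST_MakePoint(%s, %s), 4326))",
            (datetime.datetime.now(datetime.timezone.utc), author, note, lon, lat),
        )

if __name__ == "__main__":
    # Connection parameters are placeholders; the database must have PostGIS enabled.
    conn = psycopg2.connect("dbname=logbook user=survey password=secret host=localhost")
    with conn, conn.cursor() as cur:
        cur.execute(DDL)
    log_event(conn, 16.44, 43.51, "CTD team", "rosette in the water, cast 12")
    conn.close()
```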

  10. An Efficient and Scalable Content Based Dynamic Load Balancing Using Multiparameters on Load Aware Distributed Multi-Cluster Servers

    Directory of Open Access Journals (Sweden)

    T.N.Anitha

    2011-08-01

    Full Text Available Nowadays, more people are accessing Internet services for their daily activities. This dramatically increases the requirements for server utilization, bandwidth and resource availability. To serve this demand, cluster servers are used. But as the number of users increases, cluster servers face several challenges such as congestion, delays in serving requests, load balancing, heterogeneity and complexity of services. Existing dynamic load balancing does not scale the performance up in a distributed heterogeneous environment. To avoid this, we propose efficient and scalable content-based dynamic load balancing using multiple parameters on load-aware distributed multi-cluster servers. Because of the heterogeneity, load balancing takes place based on the client request category and on dynamically estimated server workload using multiple parameters such as queue size, processing speed and bandwidth utilization on the distributed multi-cluster servers. Our simulation results show that the proposed method dynamically and efficiently balances the load to scale up the services, reducing response time and improving throughput on the clustered servers.
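
    One simple way to fold several load indicators into a dispatch decision, offered only as an illustrative sketch rather than the authors' scheme, is a weighted score per candidate server; the metric names and weights below are invented.

```python
def pick_server(servers, weights=None):
    """Pick the least-loaded server from normalized per-server metrics.

    servers: dict of server id -> dict with metrics in [0, 1]:
             'queue' (relative queue length), 'cpu' (utilization) and
             'bandwidth' (link utilization), where lower is better, and
             'speed' (relative processing speed), where higher is better.
    """
    weights = weights or {"queue": 0.4, "cpu": 0.3, "bandwidth": 0.2, "speed": 0.1}

    def score(m):
        return (weights["queue"] * m["queue"]
                + weights["cpu"] * m["cpu"]
                + weights["bandwidth"] * m["bandwidth"]
                - weights["speed"] * m["speed"])

    return min(servers, key=lambda name: score(servers[name]))

if __name__ == "__main__":
    cluster = {
        "web-1": {"queue": 0.8, "cpu": 0.7, "bandwidth": 0.5, "speed": 1.0},
        "web-2": {"queue": 0.2, "cpu": 0.4, "bandwidth": 0.6, "speed": 0.6},
        "img-1": {"queue": 0.1, "cpu": 0.3, "bandwidth": 0.9, "speed": 0.8},
    }
    print(pick_server(cluster))   # img-1 with the default weights
```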

  11. Is it true that Exchange Server 2010 can only be installed on Windows Server 2008 R2? Does Windows Server 2008 R2 support Exchange Server 2007?

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The 64-bit edition of Windows Server 2008 SP2 and Windows Server 2008 R2 both support Exchange Server 2010. Windows Server 2008 R2 supports neither Exchange Server 2007 nor Exchange Server 2007 SP1, and it is not expected to add support for Exchange Server 2007 SP2 either.

  12. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    Science.gov (United States)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to adapt multiple algorithms, helping to decrease simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation caused by the atmosphere, 2) degradation caused by the optical system, 3) degradation caused by the TDI-CCD electronics plus the re-sampling process, and 4) data integration. Processes 1) to 3) use data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require powerful CPUs. Even with an Intel Xeon X5550 processor, the conventional serial method takes more than 30 hours for a simulation whose resulting image size is 1500 * 1462. A literature study found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation, based on WCF[1], which uses a client/server (C/S) architecture and harnesses the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity, achieving HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, the framework reduced simulation time by about 74%; adding more asymmetric nodes to the computing network decreased the time further. In conclusion, this framework can provide practically unlimited computation capacity provided that the network and the task management server are adequate, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.

  13. Atomic algorithm and the servers' s use to find the Hamiltonian cycles

    Directory of Open Access Journals (Sweden)

    M. Sghiar

    2016-06-01

    Full Text Available Inspired by the movement of particles in the atom, I demonstrated in [5] the existence of a polynomial algorithm of order O(n^3) for finding Hamiltonian cycles in a graph with vertex set E = {x_0, ..., x_{n-1}}. In this article I give an improvement in space and in time of that algorithm. Several methods exist for finding Hamiltonian cycles, such as the Monte Carlo method, dynamic programming, or DNA computing; unfortunately they are either expensive or slow to execute. Hence the idea of using multiple servers to solve this problem: each point x_i in the graph is considered as a server, and each server x_i communicates with every other server x_j to which it is connected. Finally, the server x_0 receives and displays the Hamiltonian cycles if they exist.

  14. Professional Team Foundation Server 2010

    CERN Document Server

    Blankenship, Ed; Holliday, Grant; Keller, Brian

    2011-01-01

    Authoritative guide to TFS 2010 from a dream team of Microsoft insiders and MVPs! Microsoft Visual Studio Team Foundation Server (TFS) has evolved until it is now an essential tool in Microsoft's Application Lifecycle Management suite of productivity tools, enabling collaboration within and among software development teams. By 2011, TFS will replace Microsoft's leading source control system, Visual SourceSafe, resulting in an even greater demand for information about it. Professional Team Foundation Server 2010, written by an accomplished team of Microsoft insiders and Microsoft MVPs, provides

  15. GeoServer beginner's guide

    CERN Document Server

    Youngblood, Brian

    2013-01-01

    Step-by-step instructions are included and the needs of a beginner are fully met by the book, which contains plenty of examples with accompanying screenshots and code for an easy learning curve. You are a web developer with knowledge of server-side scripting and experience installing applications on the server. You want more than Google Maps, offering dynamically built maps on your site with your latest geospatial data stored in MySQL, PostGIS, MsSQL or Oracle. If this is the case, this book is meant for you.

  16. Professional Team Foundation Server 2012

    CERN Document Server

    Blankenship, Ed; Holliday, Grant; Keller, Brian

    2012-01-01

    A comprehensive guide to using Microsoft Team Foundation Server 2012 Team Foundation Server has become the leading Microsoft productivity tool for software management, and this book covers what developers need to know to use it effectively. Fully revised for the new features of TFS 2012, it provides developers and software project managers with step-by-step instructions and even assists those who are studying for the TFS 2012 certification exam. You'll find a broad overview of TFS, thorough coverage of core functions, a look at extensibility options, and more, written by Microsoft ins

  17. An Empirical Evaluation of Web System Access for Smartphone Clients

    Directory of Open Access Journals (Sweden)

    Scott Fowler

    2012-11-01

    Full Text Available As smartphone clients are restricted in computational power and bandwidth, it is important to minimise the overhead of transmitted messages. This paper identifies and studies methods that reduce the amount of data being transferred via wireless links between a web service client and a web service. Measurements were performed in a real environment based on a web service prototype providing public transport information for the city of Hamburg in Germany, using actual wireless links with a mobile smartphone device. REST-based web services using the data exchange formats JSON, XML and Fast Infoset were evaluated against the existing SOAP-based web service.
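
    As a rough illustration of why the choice of exchange format matters on a bandwidth-constrained client, the snippet below serialises the same small record as JSON and as XML and compares the encoded sizes (Fast Infoset is omitted because it needs a third-party codec). The record fields are invented for the example and are not taken from the Hamburg transport service studied in the paper.

```python
import json
import xml.etree.ElementTree as ET

departure = {"line": "U3", "direction": "Barmbek", "minutes": 4}

# JSON encoding
json_bytes = json.dumps(departure).encode("utf-8")

# Equivalent XML encoding
root = ET.Element("departure")
for key, value in departure.items():
    ET.SubElement(root, key).text = str(value)
xml_bytes = ET.tostring(root, encoding="utf-8")

print(f"JSON: {len(json_bytes)} bytes")
print(f"XML:  {len(xml_bytes)} bytes")
```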

  18. Client-Centric Adaptive Scheduling of Service-Oriented Applications

    Institute of Scientific and Technical Information of China (English)

    Jing Wang; Li-Yong Zhang; Yan-Bo Han

    2006-01-01

    The paper proposes a client-centric computing model that allows for adaptive execution of service-oriented applications. The model can flexibly dispatch application tasks to the client side and the network side, dynamically adjust an execution scheme to adapt to environmental changes, and thus is expected to achieve better scalability, higher performance and more controllable privacy. Scheduling algorithms and the rescheduling strategies are proposed for the model. Experiments show that with the model the performance of service-oriented application execution can be improved.
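
    The paper does not list its scheduling algorithms in the record, so the fragment below is only a minimal sketch of the dispatch decision it describes: estimate the cost of running a task on the client versus on the network side and re-evaluate when the environment changes. All numbers and names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Environment:
    client_cpu_free: float   # fraction of client CPU available, 0..1
    bandwidth_mbps: float    # current uplink bandwidth to the network side

@dataclass
class Task:
    cpu_cost: float          # CPU-seconds needed on the client
    payload_mb: float        # data that must be shipped if run remotely
    remote_seconds: float    # execution time on the network side

def place(task: Task, env: Environment) -> str:
    """Return 'client' or 'network' depending on which side is estimated to be faster."""
    local_estimate = task.cpu_cost / max(env.client_cpu_free, 0.05)
    remote_estimate = task.remote_seconds + 8 * task.payload_mb / env.bandwidth_mbps
    return "client" if local_estimate <= remote_estimate else "network"

if __name__ == "__main__":
    task = Task(cpu_cost=2.0, payload_mb=5.0, remote_seconds=0.5)
    print(place(task, Environment(client_cpu_free=0.9, bandwidth_mbps=2.0)))   # slow link -> client
    print(place(task, Environment(client_cpu_free=0.1, bandwidth_mbps=50.0)))  # busy client -> network
```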

  19. Analysis of the Delay Occurring in the Application of a Demilitarized Zone (DMZ) to the Universitas Andalas Servers

    Directory of Open Access Journals (Sweden)

    Syariful Ikhwan

    2014-09-01

    Full Text Available Network security is vital to a computer network. If weaknesses in a computer network are not addressed and protected, they can lead to losses in the form of lost data, damage to server systems, sub-optimal service to users, or even the loss of valuable institutional assets. Various methods have been developed to maintain the security of computer networks and servers, including the DMZ (Demilitarized Zone) firewall approach. A DMZ is a firewall method that groups servers so that the passing data traffic can be better regulated. A study of the application of the DMZ method at Universitas Andalas shows that it significantly reduces attacks on the existing server systems. Applying the DMZ at Universitas Andalas introduces an inbound and outbound packet delay of 0.1544 ms, an increase of 126% over the previous delay.

  20. A polling model with an autonomous server

    NARCIS (Netherlands)

    Haan, de Roland; Boucherie, Richard J.; Ommeren, van Jan-Kees C.W.

    2007-01-01

    Polling models are used as an analytical performance tool in several application areas. In these models, the focus often is on controlling the operation of the server as to optimize some performance measure. For several applications, controlling the server is not an issue as the server moves indepen

  1. Controlling and accessing vehicle functions by mobile from remote place by sending GPS Co-ordinates to the Web server

    Directory of Open Access Journals (Sweden)

    Dr. Khanna SamratVivekanand Omprakash

    2012-01-01

    Full Text Available This paper describes how coordinates taken from Google Maps are stored in a database on a central web server. These coordinates are then transferred to a client program that searches for the location of a particular electronic device. Clients can access the data over the internet and use it in their programs through an API. Software was developed for a device installed in the vehicle: an embedded circuit holds a SIM card and transfers its signal over the mobile network, sending a single text message containing the location coordinates (latitude and longitude) from Google Maps. The information, a comma-separated string, is extracted and stored in the web server database, and different mobile numbers with their locations can be stored simultaneously for different clients. A 3-tier Client/Server architecture is used. The SIM card accesses the GPRS service of the network provider, and the device is configured to receive and send messages. Different operations can be performed on the device, since it can be attached to other electronic circuits of the vehicle. A Windows Mobile application was developed for the client side, so the user can make decisions about the vehicle from a mobile phone by sending an SMS to the device; the device receives the command and passes it to the vehicle's electronic circuit to carry out the operation. From a remote place, a mobile phone can thus retrieve information about the vehicle and also control it, with a password providing authorization and authentication to the electronic circuit. The approach supports vehicle security and vehicle localisation, and vehicle functions such as speed, brakes and lights can be accessed and controlled through the software interface to the vehicle's electronics.
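
    To make the data flow concrete, here is a minimal sketch of the server-side step the paper describes: parsing the comma-separated latitude/longitude text reported by a device and storing it against the device's mobile number. The table layout and field names are assumptions for illustration, not the paper's schema.

```python
import sqlite3
from datetime import datetime, timezone

def store_position(db: sqlite3.Connection, mobile_number: str, sms_text: str) -> None:
    """Parse an SMS of the form 'lat,long' and record it for the sending device."""
    lat_str, lon_str = sms_text.split(",", 1)
    latitude, longitude = float(lat_str), float(lon_str)
    db.execute(
        "INSERT INTO positions (mobile_number, latitude, longitude, received_at) VALUES (?, ?, ?, ?)",
        (mobile_number, latitude, longitude, datetime.now(timezone.utc).isoformat()),
    )
    db.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE positions (mobile_number TEXT, latitude REAL, longitude REAL, received_at TEXT)"
    )
    store_position(conn, "+919800000000", "23.0225,72.5714")
    print(conn.execute("SELECT * FROM positions").fetchall())
```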

  2. Distributed control system for demand response by servers

    Science.gov (United States)

    Hall, Joseph Edward

    Within the broad topical designation of smart grid, research in demand response, or demand-side management, focuses on investigating possibilities for electrically powered devices to adapt their power consumption patterns to better match generation and more efficiently integrate intermittent renewable energy sources, especially wind. Devices such as battery chargers, heating and cooling systems, and computers can be controlled to change the time, duration, and magnitude of their power consumption while still meeting workload constraints such as deadlines and rate of throughput. This thesis presents a system by which a computer server, or multiple servers in a data center, can estimate the power imbalance on the electrical grid and use that information to dynamically change the power consumption as a service to the grid. Implementation on a testbed demonstrates the system with a hypothetical but realistic usage case scenario of an online video streaming service in which there are workloads with deadlines (high-priority) and workloads without deadlines (low-priority). The testbed is implemented with real servers, estimates the power imbalance from the grid frequency with real-time measurements of the live outlet, and uses a distributed, real-time algorithm to dynamically adjust the power consumption of the servers based on the frequency estimate and the throughput of video transcoder workloads. Analysis of the system explains and justifies multiple design choices, compares the significance of the system in relation to similar publications in the literature, and explores the potential impact of the system.
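
    The thesis describes a control loop in which servers estimate the grid power imbalance from the measured mains frequency and throttle low-priority work accordingly. The sketch below shows that idea in simplified form; the nominal frequency, droop gain and throttle mapping are illustrative values, not the thesis's parameters.

```python
NOMINAL_HZ = 60.0     # North American grid; 50.0 elsewhere
DROOP_GAIN = 2.0      # how aggressively to react per Hz of deviation (illustrative)

def low_priority_throttle(measured_hz: float) -> float:
    """Return the fraction [0, 1] of low-priority (no-deadline) work to run.

    When frequency is below nominal the grid is short of power, so shed load;
    when it is above nominal, run extra low-priority work.
    """
    deviation = measured_hz - NOMINAL_HZ
    return min(1.0, max(0.0, 0.5 + DROOP_GAIN * deviation))

if __name__ == "__main__":
    for hz in (59.80, 59.95, 60.00, 60.10):
        print(f"{hz:.2f} Hz -> run {low_priority_throttle(hz):.0%} of low-priority transcodes")
```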

  3. Minimizing Thermal Stress for Data Center Servers through Thermal-Aware Relocation

    Science.gov (United States)

    Ling, T. C.; Hussain, S. A.

    2014-01-01

    A rise in inlet air temperature may lower the rate of heat dissipation from air cooled computing servers. This introduces a thermal stress to these servers. As a result, the poorly cooled active servers will start conducting heat to the neighboring servers and giving rise to hotspot regions of thermal stress, inside the data center. As a result, the physical hardware of these servers may fail, thus causing performance loss, monetary loss, and higher energy consumption for cooling mechanism. In order to minimize these situations, this paper performs the profiling of inlet temperature sensitivity (ITS) and defines the optimum location for each server to minimize the chances of creating a thermal hotspot and thermal stress. Based upon novel ITS analysis, a thermal state monitoring and server relocation algorithm for data centers is being proposed. The contribution of this paper is bringing the peak outlet temperatures of the relocated servers closer to average outlet temperature by over 5 times, lowering the average peak outlet temperature by 3.5% and minimizing the thermal stress. PMID:24987743

  4. Minimizing Thermal Stress for Data Center Servers through Thermal-Aware Relocation

    Directory of Open Access Journals (Sweden)

    Muhammad Tayyab Chaudhry

    2014-01-01

    Full Text Available A rise in inlet air temperature may lower the rate of heat dissipation from air cooled computing servers. This introduces a thermal stress to these servers. As a result, the poorly cooled active servers will start conducting heat to the neighboring servers and giving rise to hotspot regions of thermal stress, inside the data center. As a result, the physical hardware of these servers may fail, thus causing performance loss, monetary loss, and higher energy consumption for cooling mechanism. In order to minimize these situations, this paper performs the profiling of inlet temperature sensitivity (ITS) and defines the optimum location for each server to minimize the chances of creating a thermal hotspot and thermal stress. Based upon novel ITS analysis, a thermal state monitoring and server relocation algorithm for data centers is being proposed. The contribution of this paper is bringing the peak outlet temperatures of the relocated servers closer to average outlet temperature by over 5 times, lowering the average peak outlet temperature by 3.5% and minimizing the thermal stress.

  5. Client Compliance with Homework Directives during Counseling.

    Science.gov (United States)

    Worthington, Everett L., Jr.

    1986-01-01

    Investigated compliance as a function of counselor, client, and therapy variables. Results indicated that variables associated with the conduct of counseling more strongly influenced compliance with homework than did either counselor or client variables. (Author/BL)

  6. PlanetServer: Innovative approaches for the online analysis of hyperspectral satellite data from Mars

    Science.gov (United States)

    Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.

    2014-06-01

    PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer (CRISM) hyperspectral data from Mars. It is part of the EarthServer project which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage, and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The client WebGIS, consisting mainly of the OpenLayers javascript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations, surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step, and is tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies will be further discussed. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.

  7. Team Foundation Server 2013 customization

    CERN Document Server

    Beeming, Gordon

    2014-01-01

    This book utilizes a tutorial-based approach, focused on the practical customization of key features of Team Foundation Server for collaborative enterprise software projects. This practical guide is intended for those who want to extend TFS. This book is for intermediate users who have an understanding of TFS, and basic coding skills will be required for the more complex customizations.

  8. High-throughput neuroimaging-genetics computational infrastructure.

    Science.gov (United States)

    Dinov, Ivo D; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D; Franco, Joseph; Toga, Arthur W

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Result interpretation includes scientific visualization, community validation of findings and reproducible findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring validating, and disseminating of complex protocols that utilize

  9. High-Throughput Neuroimaging-Genetics Computational Infrastructure

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2014-04-01

    Full Text Available Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate and disseminate novel scientific methods, computational resources and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval and aggregation. Computational processing involves the necessary software, hardware and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical and phenotypic data and meta-data. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure.

  10. Platform computing

    CERN Multimedia

    2002-01-01

    "Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .

  11. Improving Services to Gay and Lesbian Clients.

    Science.gov (United States)

    Dulaney, Diana D.; Kelly, James

    1982-01-01

    Examines the gap in the theoretical and clinical training of social workers in helping the homosexual client. Proposes specific approaches for improving services to clients who are gay or lesbian. Discusses other neglected clients including heterosexual spouses, children of a homosexual parent, and aging homosexuals. (Author/RC)

  12. Client Involvement in Home Care Practice

    DEFF Research Database (Denmark)

    Glasdam, Stinne; Henriksen, Nina; Kjær, Lone;

    2013-01-01

    'Client involvement' has been a mantra within health policies, education curricula and healthcare institutions over many years, yet very little is known about how 'client involvement' is practised in home-care services. The aim of this article is to analyse 'client involvement' in practice, seen

  13. FirebrowseR: an R client to the Broad Institute’s Firehose Pipeline

    Science.gov (United States)

    Deng, Mario; Brägelmann, Johannes; Kryukov, Ivan; Saraiva-Agostinho, Nuno; Perner, Sven

    2017-01-01

    With its Firebrowse service (http://firebrowse.org/) the Broad Institute is making large-scale multi-platform omics data analysis results publicly available through a Representational State Transfer (REST) Application Programmable Interface (API). Querying this database through an API client from an arbitrary programming environment is an essential task, allowing other developers and researchers to focus on their analysis and avoid data wrangling. Hence, as a first result, we developed a workflow to automatically generate, test and deploy such clients for rapid response to API changes. Its underlying infrastructure, a combination of free and publicly available web services, facilitates the development of API clients. It decouples changes in server software from the client software by reacting to changes in the RESTful service and removing direct dependencies on a specific implementation of an API. As a second result, FirebrowseR, an R client to the Broad Institute’s RESTful Firehose Pipeline, is provided as a working example, which is built by the means of the presented workflow. The package’s features are demonstrated by an example analysis of cancer gene expression data. Database URL: https://github.com/mariodeng/ PMID:28062517
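
    FirebrowseR itself is an R package, so the snippet below is not the authors' client; it is a generic Python sketch of the kind of REST query such a client wraps, using the requests library against a hypothetical Firebrowse-style endpoint with made-up path and parameter names. Consult the actual API documentation for the real paths and fields.

```python
import requests

BASE_URL = "http://firebrowse.org/api/v1"   # base site from the record; the path below is hypothetical

def fetch_json(endpoint: str, **params):
    """Issue a GET request to a REST endpoint and return the decoded JSON payload."""
    params.setdefault("format", "json")       # assumed parameter name, for illustration only
    response = requests.get(f"{BASE_URL}/{endpoint}", params=params, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Hypothetical call: the endpoint and query fields are placeholders, not the documented API.
    data = fetch_json("Samples/Expression", gene="TP53", cohort="BRCA", page_size=50)
    print(type(data))
```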

  14. On the Relevancy of Efficient, Integrated Computer and Network Monitoring in HEP Distributed Online Environment

    Science.gov (United States)

    Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.

    Large Scientific Equipments are controlled by Computer Systems whose complexity is growing driven, on the one hand by the volume and variety of the information, its distributed nature, the sophistication of its treatment and, on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems or Distributed Computer Control Systems (DCCS) for those systems dealing more with real time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC in view, it is proposed to integrate the various functions of DCCS monitoring into one general purpose Multi-layer System.

  15. On the relevance of efficient, integrated computer and network monitoring in HEP distributed online environment

    CERN Document Server

    Carvalho, D F; Delgado, V; Albert, J N; Bellas, N; Javello, J; Miere, Y; Ruffinoni, D; Smith, G

    1996-01-01

    Large Scientific Equipments are controlled by Computer Systems whose complexity is growing driven, on the one hand by the volume and variety of the information, its distributed nature, the sophistication of its treatment and, on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems or Distributed Computer Control Systems (DCCS) for those systems dealing more with real time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC in view, it is proposed to integrate the various functions of DCCS monitoring into one general purpose Multi-layer ...

  16. Professional Microsoft SQL Server 2012 Administration

    CERN Document Server

    Jorgensen, Adam; LoForte, Ross; Knight, Brian

    2012-01-01

    An essential how-to guide for experienced DBAs on the most significant product release since 2005! Microsoft SQL Server 2012 will have major changes throughout the SQL Server and will impact how DBAs administer the database. With this book, a team of well-known SQL Server experts introduces the many new features of the most recent version of SQL Server and deciphers how these changes will affect the methods that administrators have been using for years. Loaded with unique tips, tricks, and workarounds for handling the most difficult SQL Server admin issues, this how-to guide deciphers topics s

  17. A polling model with an autonomous server

    OpenAIRE

    2007-01-01

    Polling models are used as an analytical performance tool in several application areas. In these models, the focus often is on controlling the operation of the server as to optimize some performance measure. For several applications, controlling the server is not an issue as the server moves independently in the system. We present the analysis for such a polling model with a so-called autonomous server. In this model, the server remains for an exogenous random time at a queue, which also impl...

  18. PROVABLE MULTI-CLONING DYNAMIC DATA CONTROL IN CLOUD COMPUTING SYSTEMS

    OpenAIRE

    2016-01-01

    Increasingly, organizations are choosing to outsource data to remote cloud service providers (CSPs). Clients can rent the CSP's storage infrastructure to store and retrieve an almost unlimited amount of data, paying fees metered per gigabyte per month. For an increased level of scalability, availability and durability, some clients may want their data to be replicated on multiple servers across multiple data centers. The more copies the CSP is asked to store, the...

  19. Performance characteristics of an IDE disks based file server in the environment of a Linux PC farm

    CERN Document Server

    Berdnikov, E B; Kardanev, A; Kotlyar, V V; Kukhtenkov, V; Lazin, Yu; Minaenko, A A; Motyakov, V; Petukhov, V; Sapunov, M; Sergeev, A; Slabospitskaya, E

    2003-01-01

    The Linux PC farm used for tests has been installed at IHEP (Protvino) in the framework of a distributed environment for future LHC computing. An important component of the farm is a 1.3 TB file server. The results of studying the performance of the server as a part of the farm are presented in the report.

  20. Bringing the client back in

    DEFF Research Database (Denmark)

    Danneris, Sophie; Nielsen, Mathias Herup

    2016-01-01

    Categorising the 'job readiness' of the unemployed client is a task of utmost importance for active labour market policies. Scholarly attention on the topic has mostly focused on either questions of political legitimacy or questions of how categories are practically negotiated in meetings between welfare system and client. This paper suggests a comparative design in which the government rhetoric of job readiness is contrasted with findings from a qualitative longitudinal study into the lived experiences of recent welfare reforms in Denmark. Thus, our study set out to explore how job readiness is defined when taking the viewpoint of vulnerable unemployed subjects themselves. A group of 25 vulnerable social assistance receivers were interviewed repeatedly in a qualitative longitudinal study from 2013-2015. The analysis presents four striking discrepancies between the government rhetoric on job...

  1. Many-server queues with customer abandonment: Numerical analysis of their diffusion model

    Directory of Open Access Journals (Sweden)

    Shuangchi He

    2013-01-01

    Full Text Available We use a multidimensional diffusion process to approximate the dynamics of a queue served by many parallel servers. Waiting customers in this queue may abandon the system without service. To analyze the diffusion model, we develop a numerical algorithm for computing its stationary distribution. A crucial part of the algorithm is choosing an appropriate reference density. Using a conjecture on the tail behavior of the limit queue length process, we propose a systematic approach to constructing a reference density. With the proposed reference density, the algorithm is shown to converge quickly in numerical experiments. These experiments demonstrate that the diffusion model is a satisfactory approximation for many-server queues, sometimes for queues with as few as twenty servers.
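
    The record's algorithm targets the diffusion approximation itself, which is not reproduced here. As a point of comparison, the sketch below computes the exact stationary distribution of the underlying Markovian many-server queue with abandonment (M/M/n+M) from its birth-death balance equations, truncating the state space; rates and the truncation level are illustrative.

```python
def mmn_abandonment_stationary(lam: float, mu: float, theta: float, n: int, max_states: int = 500):
    """Stationary distribution of an M/M/n queue where waiting customers abandon at rate theta.

    State k = number of customers in system; birth rate lam, death rate
    min(k, n)*mu + max(k - n, 0)*theta. Solved via detailed balance with truncation.
    """
    unnormalized = [1.0]
    for k in range(1, max_states + 1):
        death_rate = min(k, n) * mu + max(k - n, 0) * theta
        unnormalized.append(unnormalized[-1] * lam / death_rate)
    total = sum(unnormalized)
    return [p / total for p in unnormalized]

if __name__ == "__main__":
    # 20 servers, offered load close to capacity, moderate abandonment rate.
    pi = mmn_abandonment_stationary(lam=19.0, mu=1.0, theta=0.5, n=20)
    mean_in_system = sum(k * p for k, p in enumerate(pi))
    print(f"mean number in system ~ {mean_in_system:.2f}")
```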

  2. An M/M/2 Queueing System with Heterogeneous Servers Including One with Working Vacation

    Directory of Open Access Journals (Sweden)

    A. Krishnamoorthy

    2012-01-01

    Full Text Available This paper analyzes an M/M/2 queueing system with two heterogeneous servers, one of which is always available while the other goes on vacation in the absence of customers waiting for service. The vacationing server, however, returns to serve at a low rate when an arrival finds the other server busy. The system is analyzed in the steady state using the matrix-geometric method. The busy period of the system is analyzed and the mean waiting time in the stationary regime is computed. A conditional stochastic decomposition of the stationary queue length is obtained. An illustrative example is also provided.

  3. Call center. Focused on the client

    OpenAIRE

    Leal-Alonso-de-Castañeda, José Enrique

    2003-01-01

    Today's company must be prepared to respond to the client exactly as the client expects, because the aim is not a one-off customer but a loyal one. The globalization of the economy and of access to markets demands that the company be able to attract the client not only with a quality service but also with quality customer care. The implementation of a Call Center (customer service centre, call handling centre) therefore constitutes a business strategy that...

  4. Prototype of Multifunctional Full-text Library in the Architecture Web-browser / Web-server / SQL-server

    Science.gov (United States)

    Lyapin, Sergey; Kukovyakin, Alexey

    Within the framework of the research program "Textaurus", an operational prototype of the multifunctional library T-Libra v.4.1 has been created, which makes it possible to carry out flexible, parametrizable search within a full-text database. The information system is realized in the Web-browser / Web-server / SQL-server architecture. This achieves an optimal combination of universality and efficiency of text processing on the one hand, and convenience and minimal costs for the end user (thanks to a standard Web browser serving as the client application) on the other. The following principles underlie the information system: a) multifunctionality, b) intelligence, c) multilingual primary texts and full-text searching, d) development of the digital library (DL) by a user (the "administrative client"), e) multi-platform operation. A "library of concepts", i.e. a block of functional models of semantic (concept-oriented) searching, together with a closely connected subsystem of parametrizable queries to the full-text database, serves as the conceptual basis of the multifunctionality and "intelligence" of the DL T-Libra v.4.1. In the suggested technology, an author's paragraph is the unit of full-text searching. Moreover, the "logic" of an educational or scientific topic or problem can be built into a multilevel, flexible query structure and the "library of concepts", which can be extended by the developers and experts. About 10 queries of various levels of complexity and conceptuality are realized in the present version of the information system: from simple terminological searching (taking into account the lexical and grammatical paradigms of Russian) to several kinds of explication of terminological fields and adjustable two-parameter thematic searching (the parameters being a [set of terms] and a [distance between terms] within an author's paragraph).

  5. The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update

    OpenAIRE

    Huynh, Tien; Rigoutsos, Isidore

    2004-01-01

    In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple s...

  6. B3: Fuzzy-Based Data Center Load Optimization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    M. Jaiganesh

    2013-01-01

    Full Text Available Cloud computing has started a new era in which a variety of information pools can be reached through various internet connections by any connected device, and services are obtained by clients on a pay-per-use basis. A data center is a sophisticated high-end server that runs applications virtually in cloud computing, moving applications, services and data to large data centers. A data center provides a service level that covers the maximum number of users, so determining its utilization is a key task in finding the overall load efficiency. Hence, we propose a novel method to determine the efficiency of the data center in cloud computing. The goal is to optimize data center utilization in terms of three major factors: bandwidth, memory, and Central Processing Unit (CPU) cycles. We construct a fuzzy expert system model to obtain the maximum Data Center Load Efficiency (DCLE) in cloud computing environments. The advantage of the proposed system lies in DCLE computing: while computing, it allows regular evaluation of services to any number of clients. This approach indicates that current clouds need an order-of-magnitude improvement in data center management to be used in next-generation computing.
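
    The record does not give the fuzzy rule base, so the following Python sketch only illustrates the general shape of such a system: triangular membership functions over the three inputs (bandwidth, memory and CPU utilization) combined into a single load-efficiency score. The membership breakpoints, labels and weights are invented for illustration.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def utilization_memberships(u: float) -> dict[str, float]:
    """Fuzzify a utilization value in [0, 1] into low / moderate / high degrees."""
    return {
        "low": tri(u, -0.01, 0.0, 0.5),
        "moderate": tri(u, 0.2, 0.5, 0.8),
        "high": tri(u, 0.5, 1.0, 1.01),
    }

def dcle(bandwidth_u: float, memory_u: float, cpu_u: float) -> float:
    """Sugeno-style score: 'moderate' utilization is treated as the most efficient operating point."""
    score_of_label = {"low": 0.4, "moderate": 1.0, "high": 0.2}   # illustrative consequents
    total_weight = total_score = 0.0
    for u in (bandwidth_u, memory_u, cpu_u):
        for label, degree in utilization_memberships(u).items():
            total_weight += degree
            total_score += degree * score_of_label[label]
    return total_score / total_weight if total_weight else 0.0

if __name__ == "__main__":
    print(f"balanced load:  DCLE = {dcle(0.5, 0.55, 0.45):.2f}")
    print(f"saturated load: DCLE = {dcle(0.95, 0.9, 0.97):.2f}")
```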

  7. Application of grid computing technology for fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    王明赞; 张滋业

    2007-01-01

    This paper introduces the basic principle and structure of grid computing technology, as well as its application in combination with fault diagnosis expert systems. It proposes using grid nodes to construct an integrated fault diagnosis system based on object models, experience rules, neural network models and practical experience, so that the intelligence and efficiency of the diagnosis system are improved. The method of applying an OGSA-based grid architecture to fault diagnosis is studied, and an approach to fault diagnosis within a LAN is put forward: the server is made up of grid nodes used for spectrum diagnosis and data-trend diagnosis, and a Web Service program enables client-server access and grid computation. The system control flow is also presented.

  8. PostgreSQL server programming

    CERN Document Server

    Krosing, Hannu

    2013-01-01

    This practical guide leads you through numerous aspects of working with PostgreSQL. Step by step examples allow you to easily set up and extend PostgreSQL. ""PostgreSQL Server Programming"" is for moderate to advanced PostgreSQL database professionals. To get the best understanding of this book, you should have general experience in writing SQL, a basic idea of query tuning, and some coding experience in a language of your choice.

  9. LISA, the next generation: from a web-based application to a fat client.

    Science.gov (United States)

    Pierlet, Noëlla; Aerts, Werner; Vanautgaerden, Mark; Van den Bosch, Bart; De Deurwaerder, André; Schils, Erik; Noppe, Thomas

    2008-01-01

    The LISA application, developed by the University Hospitals Leuven, permits referring physicians to consult the electronic medical records of their patients over the internet in a highly secure way. We decided to completely change the way we secured the application, discard the existing web application and build a completely new application, based on the in-house developed hospital information system, used in the University Hospitals Leuven. The result is a fat Java client, running on a Windows Terminal Server, secured by a commercial SSL-VPN solution.

  10. Microsoft SQL Server 2012 with Hadoop

    CERN Document Server

    Sarkar, Debarchan

    2013-01-01

    This book will be a step-by-step tutorial, which practically teaches working with big data on SQL Server through sample examples in increasing complexity.Microsoft SQL Server 2012 with Hadoop is specifically targeted at readers who want to cross-pollinate their Hadoop skills with SQL Server 2012 business intelligence and data analytics. A basic understanding of traditional RDBMS technologies and query processing techniques is essential.

  11. Understanding Ajax applications by connecting client and server-side execution traces

    NARCIS (Netherlands)

    Zaidman, A.E.; Matthijssen, N.; Storey, M.A.; Van Deursen, A.

    2012-01-01

    Ajax-enabled Web applications are a new breed of highly interactive, highly dynamic Web applications. Although Ajax allows developers to create rich Web applications, Ajax applications can be difficult to comprehend and thus to maintain. For this reason, we have created FireDetective, a tool that us

  12. Serving Clients When the Server Crashes: How Frontline Workers Cope With E-Government Challenges

    NARCIS (Netherlands)

    Tummers, L.G.; Rocco, P.

    2015-01-01

    Implementing e-government in the contemporary American state is challenging. E-government places high technical demands on agencies and citizens in an environment of budget austerity and political polarization. Governments developing e-government policies often mobilize frontline workers (also termed

  13. A welding document management software package based on a Client/Server structure

    Institute of Scientific and Technical Information of China (English)

    魏艳红; 杨春利; 王敏

    2003-01-01

    According to the specifications for welding procedure qualification in ASME Section IX and the Chinese code JB 4708-2000, a software package for managing welding documents has been rebuilt. The new software package can be used in a Local Area Network (LAN) with 4 different levels of authority for different users, so that welding documents, including DWPS (Design for Welding Procedure Specifications), PQRs (Procedure Qualification Records) and WPS (Welding Procedure Specifications), can be shared within a company. At the same time, the system provides users with various functions such as browsing, copying, editing, searching and printing records, and helps users decide whether a new PQR test is necessary according to the above codes. Furthermore, super users can also browse the history of record modifications and retrieve records when needed.

  14. Mastering Windows Server 2008 Networking Foundations

    CERN Document Server

    Minasi, Mark; Mueller, John Paul

    2011-01-01

    Find in-depth coverage of general networking concepts and basic instruction on Windows Server 2008 installation and management including active directory, DNS, Windows storage, and TCP/IP and IPv4 networking basics in Mastering Windows Server 2008 Networking Foundations. One of three new books by best-selling author Mark Minasi, this guide explains what servers do, how basic networking works (IP basics and DNS/WINS basics), and the fundamentals of the under-the-hood technologies that support staff must understand. Learn how to install Windows Server 2008 and build a simple network, security co

  15. Windows Server 2012: New features and changes

    OpenAIRE

    2013-01-01

    The purpose of this thesis is to shed light on the changes in the Windows Server 2012 operating system compared with the older Windows Server 2008 R2 version. The work was begun before the Windows Server 2012 release by testing the Release Candidate version and, later after the release, continued with the Windows Server trial version. The thesis nevertheless contains up-to-date information on Windows Server 2012. First, the development history of Windows Server is briefly covered, and the newest Windows Server is discussed as a product...

  16. Mastering Microsoft Windows Small Business Server 2008

    CERN Document Server

    Johnson, Steven

    2010-01-01

    A complete, winning approach to the number one small business solution. Do you have 75 or fewer users or devices on your small-business network? Find out how to integrate everything you need for your mini-enterprise with Microsoft's new Windows Server 2008 Small Business Server, a custom collection of server and management technologies designed to help small operations run smoothly without a giant IT department. This comprehensive guide shows you how to master all SBS components as well as handle integration with other Microsoft technologies.: Focuses on Windows Server 2008 Small Business Serv

  17. Microsoft Windows Server 2012 administration instant reference

    CERN Document Server

    Hester, Matthew

    2013-01-01

    Fast, accurate answers for common Windows Server questions Serving as a perfect companion to all Windows Server books, this reference provides you with quick and easily searchable solutions to day-to-day challenges of Microsoft's newest version of Windows Server. Using helpful design features such as thumb tabs, tables of contents, and special heading treatments, this resource boasts a smooth and seamless approach to finding information. Plus, quick-reference tables and lists provide additional on-the-spot answers. Covers such key topics as server roles and functionality, u

  18. The GLEaMviz computational tool, a publicly available software to explore realistic epidemic spreading scenarios at the global scale

    Directory of Open Access Journals (Sweden)

    Quaggiotto Marco

    2011-02-01

    Full Text Available Abstract Background Computational models play an increasingly important role in the assessment and control of public health crises, as demonstrated during the 2009 H1N1 influenza pandemic. Much research has been done in recent years in the development of sophisticated data-driven models for realistic computer-based simulations of infectious disease spreading. However, only a few computational tools are presently available for assessing scenarios, predicting epidemic evolutions, and managing health emergencies that can benefit a broad audience of users including policy makers and health institutions. Results We present "GLEaMviz", a publicly available software system that simulates the spread of emerging human-to-human infectious diseases across the world. The GLEaMviz tool comprises three components: the client application, the proxy middleware, and the simulation engine. The latter two components constitute the GLEaMviz server. The simulation engine leverages on the Global Epidemic and Mobility (GLEaM) framework, a stochastic computational scheme that integrates worldwide high-resolution demographic and mobility data to simulate disease spread on the global scale. The GLEaMviz design aims at maximizing flexibility in defining the disease compartmental model and configuring the simulation scenario; it allows the user to set a variety of parameters including: compartment-specific features, transition values, and environmental effects. The output is a dynamic map and a corresponding set of charts that quantitatively describe the geo-temporal evolution of the disease. The software is designed as a client-server system. The multi-platform client, which can be installed on the user's local machine, is used to set up simulations that will be executed on the server, thus avoiding specific requirements for large computational capabilities on the user side. Conclusions The user-friendly graphical interface of the GLEaMviz tool, along with its high level

  19. Essential Mac OS X panther server administration integrating Mac OS X server into heterogeneous networks

    CERN Document Server

    Bartosh, Michael

    2004-01-01

    If you've ever wondered how to safely manipulate Mac OS X Panther Server's many underlying configuration files or needed to explain AFP permission mapping--this book's for you. From the command line to Apple's graphical tools, the book provides insight into this powerful server software. Topics covered include installation, deployment, server management, web application services, data gathering, and more

  20. Preconceptions in the nurse-client relationship.

    Science.gov (United States)

    Forchuk, C

    1994-01-01

    Nursing theorist, Hildegard Peplau (1952) has identified the concept of preconceptions as critical in the development of the therapeutic nurse-client relationship. Although stereotypes exist for both nurses and chronic psychiatric clients, very little research has been reported on the preconceptions nurses and psychiatric clients have of each other. This investigation utilized non-probability, purposive sampling of 20 newly formed nurse-client dyads within programmes serving a chronically mentally ill population in Canada. Subjects were asked to give descriptions of each other. Semantic differentials based on this feedback were then developed and administered to 124 nurse-client dyads. Clients' statements generally evaluated their nurses positively. The generally positive views expressed by nurses and clients did not reflect public stereotypes for either group. The preconceptions the clients had of their nurses, and nurses had of their clients were related to both the quality of the emerging relationship (task, bond and goals) and the duration of the orientation phase. The preconceptions were virtually unchanged over the initial 6 months of the relationship.

  1. Autonomic Performance and Power Control on Virtualized Servers:Survey, Practices, and Trends

    Institute of Scientific and Technical Information of China (English)

    周笑波; 蒋昌俊

    2014-01-01

    Modern datacenter servers hosting popular Internet services face significant and multi-facet challenges in performance and power control. The user-perceived performance is the result of a complex interaction of complex workloads in a very complex underlying system. Highly dynamic and bursty workloads of Internet services fluctuate over multiple time scales, which has a significant impact on processing and power demands of datacenter servers. High-density servers apply virtualization technology for capacity planning and system manageability. Such virtualized computer systems are increasingly large and complex. This paper surveys representative approaches to autonomic performance and power control on virtualized servers, which control the quality of service provided by virtualized resources, improve the energy efficiency of the underlying system, and reduce the burden of complex system management from human operators. It then presents three designed self-adaptive resource management techniques based on machine learning and control for percentile-based response time assurance, non-intrusive energy-efficient performance isolation, and joint performance and power guarantee on virtualized servers. The techniques were implemented and evaluated in a testbed of virtualized servers hosting benchmark applications. Finally, two research trends are identified and discussed for sustainable cloud computing in green datacenters.

  2. Performance Measurement of Cloud Computing Services

    CERN Document Server

    Suakanto, Sinung; Suhardi,; Saragih, Roberd

    2012-01-01

    Cloud computing has now been growing as a new technology and a new business model. From a distributed-systems perspective, cloud computing most resembles client-server services such as web-based or web-service systems, but it uses virtual resources for execution. Currently, cloud computing relies on elastic virtual machines and on the network for data exchange. We conducted an experimental setup to measure the quality of service received by cloud computing customers, creating an HTTP service that runs on cloud computing infrastructure. We are interested in the impact of an increasing number of users on the average quality received by users, measured with two parameters: average response time and the number of request timeouts. The experimental results of this study show that increasing the number of users increases the average response time, and the number of request timeouts likewise grows with the number of users. It m...
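
    A minimal version of such a measurement can be scripted as below: issue a batch of concurrent HTTP requests for several user counts and record the mean response time and the number of timeouts. The target URL, timeout value and user counts are placeholders; the original study's setup is not reproduced here.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
from urllib.error import URLError

TARGET = "http://example.com/"   # placeholder endpoint
TIMEOUT_S = 2.0

def one_request(_: int):
    """Return the response time in seconds, or None if the request timed out or failed."""
    start = time.perf_counter()
    try:
        with urlopen(TARGET, timeout=TIMEOUT_S):
            return time.perf_counter() - start
    except (URLError, OSError):
        return None

def measure(concurrent_users: int) -> tuple[float, int]:
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(one_request, range(concurrent_users)))
    ok = [r for r in results if r is not None]
    avg = sum(ok) / len(ok) if ok else float("nan")
    return avg, len(results) - len(ok)

if __name__ == "__main__":
    for users in (1, 5, 10, 20):
        avg_rt, timeouts = measure(users)
        print(f"{users:>3} users: avg response {avg_rt:.3f} s, timeouts {timeouts}")
```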

  3. Mobile Virtual Environments in Pervasive Computing

    Science.gov (United States)

    Lazem, Shaimaa; Abdel-Hamid, Ayman; Gračanin, Denis; Adams, Kevin P.

    Recently, human computer interaction has shifted from traditional desktop computing to the pervasive computing paradigm where users are engaged with everywhere and anytime computing devices. Mobile virtual environments (MVEs) are an emerging research area that studies the deployment of virtual reality applications on mobile devices. MVEs present additional challenges to application developers due to the restricted resources of the mobile devices, in addition to issues that are specific to wireless computing, such as limited bandwidth, high error rate and handoff intervals. Moreover, adaptive resource allocation is a key issue in MVEs where user interactions affect system resources, which, in turn, affects the user’s experience. Such interplay between the user and the system can be modelled using game theory. This chapter presents MVEs as a real-time interactive distributed system, and investigates the challenges in designing and developing a remote rendering prefetching application for mobile devices. Furthermore, we introduce game theory as a tool for modelling decision-making in MVEs by describing a game between the remote rendering server and the mobile client.

  4. Efficient Server-Aided 2PC for Mobile Phones

    Directory of Open Access Journals (Sweden)

    Mohassel Payman

    2016-04-01

    Full Text Available Secure Two-Party Computation (2PC) protocols allow two parties to compute a function of their private inputs without revealing any information besides the output of the computation. There exist low cost general-purpose protocols for semi-honest parties that can be efficiently executed even on smartphones. However, for the case of malicious parties, current 2PC protocols are significantly less efficient, limiting their use to more resourceful devices. In this work we present an efficient 2PC protocol that is secure against malicious parties and is light enough to be used on mobile phones. The protocol is an adaptation of the protocol of Nielsen et al. (Crypto, 2012) to the Server-Aided setting, a natural relaxation of the plain model for secure computation that allows the parties to interact with a server (e.g., a cloud) who is assumed not to collude with any of the parties. Our protocol has two stages: In an offline stage - where no party knows which function is to be computed, nor who else is participating - each party interacts with the server and downloads a file. Later, in the online stage, when two parties decide to execute a 2PC together, they can use the files they have downloaded earlier to execute the computation with cost that is lower than the currently best semi-honest 2PC protocols. We show an implementation of our protocol for Android mobile phones, discuss several optimizations and report on its evaluation for various circuits. For example, the online stage for evaluating a single AES circuit requires only 2.5 seconds and can be further reduced to 1 second (amortized time with multiple executions).

  5. Measuring SIP proxy server performance

    CERN Document Server

    Subramanian, Sureshkumar V

    2013-01-01

    Internet Protocol (IP) telephony is an alternative to the traditional Public Switched Telephone Networks (PSTN), and the Session Initiation Protocol (SIP) is quickly becoming a popular signaling protocol for VoIP-based applications. SIP is a peer-to-peer multimedia signaling protocol standardized by the Internet Engineering Task Force (IETF), and it plays a vital role in providing IP telephony services through its use of the SIP Proxy Server (SPS), a software application that provides call routing services by parsing and forwarding all the incoming SIP packets in an IP telephony network.SIP Pr

  6. IMPACTS OF APPLICATION USAGE AND LOCAL HARDWARE ON THE THROUGHPUT OF COMPUTER NETWORKS WITH DESKTOP VIRTUALIZATION

    Directory of Open Access Journals (Sweden)

    Vitor Chaves De Oliveira

    2013-01-01

    Full Text Available Currently, virtualization solutions are employed in the vast majority of organizations around the world. The reasons for this are the benefits gained by the approach, focusing on increases in security, availability and data integrity. These benefits are also present in a new technique, which emerges from this same concept and is called desktop virtualization. This method, propelled by these advantages, has grown considerably and is likely to be implemented in more than three-quarters of organizations before 2014. As it is a technique based on a physical client-server architecture, it conducts all its actions on a local computer and responds to user interaction through clients that are physically elsewhere. This means that the technique depends on the communication network which makes the interaction possible. Therefore, the importance of the network is increased and it is important to study its behavior compared to a traditional desktop solution, that is, a local solution. This article demonstrates the impact related to a Quality of Service (QoS) parameter, throughput, which suffered great alterations depending on the implemented computational environment. Concomitantly, other results are expressed concerning the Quality of Experience (QoE) decay with a thin client and a significant benefit of virtualization on the QoS when remote access is required.

  7. Berkeley Phylogenomics Group web servers: resources for structural phylogenomic analysis.

    Science.gov (United States)

    Glanville, Jake Gunn; Kirshner, Dan; Krishnamurthy, Nandini; Sjölander, Kimmen

    2007-07-01

    Phylogenomic analysis addresses the limitations of function prediction based on annotation transfer, and has been shown to enable the highest accuracy in prediction of protein molecular function. The Berkeley Phylogenomics Group provides a series of web servers for phylogenomic analysis: classification of sequences to pre-computed families and subfamilies using the PhyloFacts Phylogenomic Encyclopedia, FlowerPower clustering of proteins sharing the same domain architecture, MUSCLE multiple sequence alignment, SATCHMO simultaneous alignment and tree construction and SCI-PHY subfamily identification. The PhyloBuilder web server provides an integrated phylogenomic pipeline starting with a user-supplied protein sequence, proceeding to homolog identification, multiple alignment, phylogenetic tree construction, subfamily identification and structure prediction. The Berkeley Phylogenomics Group resources are available at http://phylogenomics.berkeley.edu.

  8. The Configuration Strategies on Caching for Web Servers

    Institute of Scientific and Technical Information of China (English)

    GUO Chengcheng; ZHANG Li; YAN Puliu

    2006-01-01

    The Web cluster has become a popular network server architecture because of its scalability and cost effectiveness. A cache configured in the servers can increase performance significantly. In this paper, we discuss suitable configuration strategies for caching dynamic content, based on our experimental results. Since the system itself already supports caching of static Web pages, through the computer memory cache and the disk's own cache, in some experiments we adopt a special pattern that caches only dynamic Web pages in order to enlarge the cache space. Three different replacement algorithms are introduced in our cache proxy module to test the practical effects of caching dynamic pages under different conditions. The paper chiefly analyzes the influence of generation time and access frequency on caching dynamic Web pages, and provides detailed experimental results and the main conclusions.
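
    The record does not name its three replacement algorithms, so the sketch below shows just one common baseline, an LRU cache for rendered dynamic pages keyed by URL, with a cost-aware variant hinted at in the comments (evict the page whose generation time is lowest relative to its access frequency). Class and parameter names are illustrative.

```python
from collections import OrderedDict

class DynamicPageCache:
    """LRU cache for generated pages; stores generation time so a cost-aware policy could be swapped in."""

    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._entries: OrderedDict[str, tuple[str, float]] = OrderedDict()  # url -> (html, gen_seconds)

    def get(self, url: str):
        if url not in self._entries:
            return None
        self._entries.move_to_end(url)          # mark as most recently used
        return self._entries[url][0]

    def put(self, url: str, html: str, gen_seconds: float) -> None:
        if url in self._entries:
            self._entries.move_to_end(url)
        self._entries[url] = (html, gen_seconds)
        if len(self._entries) > self.capacity:
            # Plain LRU eviction; a cost-aware policy would instead drop the entry
            # with the smallest generation-time / access-frequency ratio.
            self._entries.popitem(last=False)

if __name__ == "__main__":
    cache = DynamicPageCache(capacity=2)
    cache.put("/news", "<html>news</html>", gen_seconds=0.8)
    cache.put("/weather", "<html>weather</html>", gen_seconds=0.2)
    cache.get("/news")                     # refresh /news
    cache.put("/stocks", "<html>stocks</html>", gen_seconds=1.5)  # evicts /weather (LRU)
    print(cache.get("/weather"))           # None
```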

  9. The web server of IBM's Bioinformatics and Pattern Discovery group.

    Science.gov (United States)

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo

    2003-07-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  10. The Impact of Client Expertise, Client Gender and Auditor Gender on Auditors' Judgments

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); J.E. Hunton (James); M.I. Gomaa (Mohamed)

    2006-01-01

    textabstractThe purpose of the current study is to assess the extent to which auditors’ judgments are affected by client expertise, client gender and auditor gender. Prior audit research suggests that auditors place more weight on evidence received from clients who possess higher, relative to lower

  11. Linux Server Hacks, 2 Tips & Tools for Connecting, Monitoring, and Troubleshooting

    CERN Document Server

    von Hagen, William

    2009-01-01

    This handy reference offers 100 completely new server management tips and techniques designed to improve your productivity and sharpen your administrative skills. Each hack represents a clever way to accomplish a specific task, saving you countless hours of searching for the right answer. And you don't have to be a system administrator with hundreds of boxen to get something useful from this book as many of the hacks apply equally well to a single system or a home network. Whether they help you recover lost data, collect information from distributed clients, or synchronize administrative envir

  12. AUTHENTICATION ALGORITHM FOR PARTICIPANTS OF INFORMATION INTEROPERABILITY IN PROCESS OF OPERATING SYSTEM REMOTE LOADING ON THIN CLIENT

    Directory of Open Access Journals (Sweden)

    Y. A. Gatchin

    2016-05-01

    Full Text Available Subject of Research. This paper presents a solution to the authentication problem for all components of information interoperability in the process of operating system network loading on a thin client from a terminal server. System Definition. In the proposed solution, the operating system integrity check is performed by a hardware-software module that includes a USB token with protected memory for secure storage of cryptographic keys and the loader. The key requirement for the solution is mutual authentication of four participants: terminal server, thin client, token and user. We have created two algorithms to solve the problem. The first compares an encrypted one-time password (a random number) with the reference value stored in the memory of the token and updates this number in case of successful authentication. The second uses the public and private keys of the token and the server. As a result of the cryptographic transformations, the participants are authenticated and a secure channel is formed between the token, thin client and terminal server. Main Results. Additional research was carried out to find out whether the designed algorithms meet the necessary requirements. The criteria included applicability in a multi-access terminal system architecture, evaluation of potential threats and overall system security. According to the analysis results, the PKI-based algorithm is recommended due to its high scalability and usability. A high level of data security is achieved through the use of asymmetric cryptography, with the guarantee that participants' private keys are never transmitted during authentication. Practical Relevance. The designed PKI-based algorithm solves the problem using state-standard cryptographic algorithms even in the absence of a state standard for asymmetric cryptography, so it can be applied in State Information Systems with increased information security requirements.
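
    The first of the two schemes (comparing an encrypted one-time random value with a reference stored in the token and updating it after a successful check) can be illustrated by a minimal Python sketch; the key handling, class names and message format below are illustrative assumptions, not the exact protocol defined in the paper:

        import hmac, hashlib, os

        # Hypothetical illustration: the token and the terminal server share a
        # secret key and a stored one-time random value. The server proves
        # knowledge of the current value by sending its keyed hash; on success
        # both sides roll the value forward.
        KEY = os.urandom(32)                    # shared secret provisioned on the token

        class Token:
            def __init__(self, key, initial):
                self.key = key
                self.reference = initial        # one-time password kept in protected memory

            def verify_and_update(self, proof, new_value):
                expected = hmac.new(self.key, self.reference, hashlib.sha256).digest()
                if hmac.compare_digest(proof, expected):
                    self.reference = new_value  # update only after successful authentication
                    return True
                return False

        # Terminal-server side (also knows KEY and the current one-time value).
        current = os.urandom(16)
        token = Token(KEY, current)

        proof = hmac.new(KEY, current, hashlib.sha256).digest()
        next_value = os.urandom(16)
        print("authenticated:", token.verify_and_update(proof, next_value))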

  13. Combining Natural Human-Computer Interaction and Wireless Communication

    Directory of Open Access Journals (Sweden)

    Ştefan Gheorghe PENTIUC

    2011-01-01

    Full Text Available In this paper we present how human-computer interaction can be improved by using wireless communication between devices. Devices that offer natural user interaction, like the Microsoft Surface Table and tablet PCs, can work together to enhance the experience of an application. On one hand, users can manipulate the virtual world more naturally through physical objects; on the other, they can interact with other, wirelessly connected users. Physical objects that interact with the surface table have a tag attached to them, allowing us to identify them and take the required action. The TCP/IP protocol was used to handle communication over the wireless network. A server and a client application were developed for the devices used. To target a wide range of mobile devices, different frameworks for developing cross-platform applications were analyzed.
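
    The record does not include implementation details, but the exchange it describes (a client sending the tag of a detected physical object to a server over TCP/IP and receiving the corresponding action) can be sketched with Python's standard socket module; the host, port and payload format are hypothetical:

        import socket
        import threading
        import time

        HOST, PORT = "127.0.0.1", 5005          # hypothetical endpoint on the wireless LAN

        def server():
            # Accept one connection, read the tag of a detected physical object,
            # and reply with the action associated with that tag.
            with socket.create_server((HOST, PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    tag = conn.recv(1024).decode()
                    conn.sendall(f"action-for:{tag}".encode())

        threading.Thread(target=server, daemon=True).start()
        time.sleep(0.2)                          # give the listener time to start

        # Client side (e.g. a tablet): send the detected tag, read the reply.
        with socket.create_connection((HOST, PORT)) as cli:
            cli.sendall(b"surface-tag-42")
            print(cli.recv(1024).decode())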

  14. Installing and Configuring Application Software on the LHC Computing Grid

    CERN Document Server

    Donno, Flavia; CERN. Geneva. IT Department

    2005-01-01

    The management of application software is a major scientific and practical challenge for designers of large-scale production Grids. The Large Hadron Collider Computing Grid is unique in the sense that the coupling between application scientists and resource providers is extremely loose, adding even more complexity to the software management problem. After an analysis of the requirements for a Grid software management service from the users' and site administrators' perspectives, we give an overview of the solution adopted by the LHC Grid infrastructure to support High Energy Physics experiments, highlighting its features and current limitations. Tank&Spark is our server-client solution that extends the LHC Grid application software system and tackles some of its limitations. Tank&Spark can also be used as a stand-alone service in other Grid infrastructures. Here we illustrate the design, deployment and preliminary results obtained.

  15. Tandem queue with server slow-down

    NARCIS (Netherlands)

    D.I. Miretskiy; W.R.W. Scheinhardt; M.R.H. Mandjes

    2007-01-01

    We study how rare events happen in the standard two-node tandem Jackson queue and in a generalization, the so-called slow-down network, see [2]. In the latter model the service rate of the first server depends on the number of jobs in the second queue: the first server slows down if the amount of job

  16. What's New in Apache Web Server 2.2?

    CERN Document Server

    Bowen, Rich

    2007-01-01

    What's New in Apache Web Server 2.2? shows you all the new features you'll need to know to set up and administer the Apache 2.2 web server. Learn how to take advantage of its improved caching, proxying, and authentication, among other enhancements, in your Web 2.0 applications.

  17. The NASA Technical Report Server

    Science.gov (United States)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Paulson, Sharon S.; Binkley, Robert L.; Kellogg, Yvonne D.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael J.; Accomazzi, Alberto

    1995-01-01

    The National Aeronautics and Space Act of 1958 established NASA and charged it to "provide for the widest practicable and appropriate dissemination of information concerning its activities and the results thereof." The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. The NTRS comprises several units, some constructed especially for inclusion in NTRS, and others that are existing NASA publication services that NTRS reuses. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the service. The NTRS is largely constructed with freely available software running on existing hardware, and the resulting additional exposure for the body of literature contained ensures that NASA's institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  18. Organizational and Client Commitment among Contracted Employees

    Science.gov (United States)

    Coyle-Shapiro, Jacqueline A-M.; Morrow, Paula C.

    2006-01-01

    This study examines affective commitment to employing and client organizations among long-term contracted employees, a new and growing employment classification. Drawing on organizational commitment and social exchange literatures, we propose two categories of antecedents of employee commitment to client organizations. We tested our hypotheses…

  19. Client Contact versus Paperwork: A Student Perspective.

    Science.gov (United States)

    Strohmer, Douglas C.; And Others

    1979-01-01

    Surveys master's level rehabilitation counseling students and examines percentage of time students spend involved in client contact and paperwork during their internship. Time spent in client contact was nearly double that spent doing paperwork for this group. Data from a number of settings are discussed. (Author)

  20. Toward Achieving the Dietitian's Goal: Client Education.

    Science.gov (United States)

    Wulf, Kathleen M.; Biltz, Peggy

    The role of the dietitian as a teacher of clients who must adhere to a special diet for health reasons is discussed. The objective of this education process is to achieve a complete understanding on the part of the client not only of what is allowed in the diet but also why it is desirable. The dietitian in the professional role as an educator…

  1. MISTIC: mutual information server to infer coevolution

    DEFF Research Database (Denmark)

    Simonetti, Franco L.; Teppa, Elin; Chernomoretz, Ariel

    2013-01-01

    MISTIC (mutual information server to infer coevolution) is a web server for graphical representation of the information contained within a MSA (multiple sequence alignment) and a complete analysis tool for Mutual Information networks in protein families. The server outputs a graphical visualization...... of several information-related quantities using a circos representation. This provides an integrated view of the MSA in terms of (i) the mutual information (MI) between residue pairs, (ii) sequence conservation and (iii) the residue cumulative and proximity MI scores. Further, an interactive interface...... containing all results can be downloaded. The server is available at http://mistic.leloir.org.ar. In summary, MISTIC allows for a comprehensive, compact, visually rich view of the information contained within an MSA in a manner unique to any other publicly available web server. In particular, the use...

  2. Cost Effective RADIUS Authentication for Wireless Clients

    Directory of Open Access Journals (Sweden)

    Alexandru ENACEANU

    2010-12-01

    Full Text Available Network administrators need to keep administrative user information for each network device, but network devices usually support only limited functions for user management. WLAN security is a pressing problem, and solving it involves considerable overhead, especially in corporate wireless networks. Administrators can set up a RADIUS server that uses an external database server to handle authentication, authorization, and accounting for network security purposes.

  3. Multimedia, visual computing, and the information superhighway

    Science.gov (United States)

    Kitson, Frederick L.

    1996-04-01

    The data types of graphics, images, audio and video or collectively multimedia are becoming standard components of most computer interfaces and applications. Medical imaging in particular will be able to exploit these capabilities in concert with the database engines or 'information furnaces' that will exist as part of the information superhighway. The ability to connect experts with patients electronically enables care delivery from remote diagnostics to remote surgery. Traditional visual computing tasks such as MRI, volume rendering, computer vision or image processing may also be available to more clinics and researchers as they become 'electronically local.' Video is the component of multimedia that provides the greatest sense of presence or visual realism yet has been the most difficult to offer digitally due to its high transmission, storage and computation requirements. Advanced 3D graphics have also been a scarce or at least expensive resource. This paper addresses some of the recent innovations in media processing and client/server technology that will facilitate PCs, workstations or even set-top/TV boxes to process both video and graphics in real-time.

  4. An efficient biometric and password-based remote user authentication using smart card for Telecare Medical Information Systems in multi-server environment.

    Science.gov (United States)

    Maitra, Tanmoy; Giri, Debasis

    2014-12-01

    Medical organizations have introduced the Telecare Medical Information System (TMIS) to provide a reliable facility by which a patient who is unable to visit a doctor in a critical or urgent period can communicate with a doctor through a medical server via the internet from home. An authentication mechanism is needed in TMIS to protect the secret information of both parties, namely a server and a patient. Recent research includes the patient's biometric information as well as a password to design remote user authentication schemes that enhance the security level. In a single-server environment, one server is responsible for providing services to all authorized remote patients. However, a problem arises when a patient wishes to access several branch servers: he/she needs to register with each branch server individually. In 2014, Chuang and Chen proposed a remote user authentication scheme for the multi-server environment. In this paper, we show that in their scheme a non-registered adversary can successfully log in to the system as a valid patient. To resist these weaknesses, we propose an authentication scheme for TMIS in the multi-server environment in which patients register only once with a root telecare server called the registration center (RC) and then obtain services from all telecare branch servers through their registered smart card. Security analysis and comparison show that our proposed scheme provides better security with low computational and communication cost.

  5. Randomized assignment of jobs to servers in heterogeneous clusters of shared servers for low delay

    Directory of Open Access Journals (Sweden)

    Arpan Mukhopadhyay

    2016-11-01

    Full Text Available We consider the problem of assigning jobs to servers in a multi-server system consisting of N parallel processor sharing servers, categorized into M (≪N) different types according to their processing capacities or speeds. Jobs of random sizes arrive at the system according to a Poisson process with rate Nλ. Upon each arrival, some servers of each type are sampled uniformly at random. The job is then assigned to one of the sampled servers based on their states. We propose two schemes, which differ in the metric for choosing the destination server for each arriving job. Our aim is to reduce the mean sojourn time of the jobs in the system. It is shown that the proposed schemes achieve the maximal stability region, without requiring knowledge of the system parameters. The performance of the system operating under the proposed schemes is analyzed in the limit as N→∞. This gives rise to a mean field limit. The mean field is shown to have a unique, globally asymptotically stable equilibrium point which approximates the stationary distribution of load at each server. Asymptotic independence among the servers is established using a notion of intra-type exchangeability which generalizes the usual notion of exchangeability. It is further shown that the tail distribution of server occupancies decays doubly exponentially for each server type. Numerical evidence shows that at high load the proposed schemes perform at least as well as other schemes that require more knowledge of the system parameters.
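
    As a rough, hypothetical illustration of the sampling idea (not a reproduction of the paper's two schemes or of its mean-field analysis), the sketch below assigns each arriving job to the least-occupied of d randomly sampled processor-sharing servers and coarsely tracks occupancy; all parameters are made up:

        import random

        def simulate(N=200, d=2, lam=0.9, mean_size=1.0, speeds=(1.0, 2.0), horizon=50.0):
            """Coarse-grained toy simulation of sample-d job assignment."""
            speed = [random.choice(speeds) for _ in range(N)]   # heterogeneous server speeds
            jobs = [[] for _ in range(N)]                       # remaining work at each server
            t = 0.0
            while t < horizon:
                dt = random.expovariate(N * lam)                # time to next (Poisson) arrival
                for i in range(N):                              # processor sharing: each of the n
                    n = len(jobs[i])                            # jobs gets speed/n, coarsely
                    if n:                                       # ignoring departures within dt
                        share = speed[i] * dt / n
                        jobs[i] = [w - share for w in jobs[i] if w > share]
                t += dt
                sampled = random.sample(range(N), d)            # sample d servers uniformly
                target = min(sampled, key=lambda i: len(jobs[i]))   # one possible choice metric
                jobs[target].append(random.expovariate(1.0 / mean_size))
            return sum(len(q) for q in jobs) / N

        print("mean jobs per server:", simulate())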

  6. Control server for the PS orbit acquisition system Status 2009

    CERN Document Server

    Bart-Pedersen, S; CERN. Geneva. BE Department

    2009-01-01

    CERN’s Proton Synchrotron (CPS) has been fitted with a new Trajectory Measurement System (TMS). Analogue signals from forty Beam Position Monitors (BPM) are digitized at 125 MS/s, and then further treated in the digital domain to derive positions of all individual particle bunches on the fly. Large FPGAs are used to handle the digital processing. The system fits in fourteen plug-in modules distributed over three half-width cPCI crates that store data in circular buffers. They are connected to a Linux computer by means of a private Gigabit Ethernet segment. Dedicated server software, running under Linux, knits the system into a coherent whole [1]. The corresponding low-level software using FESA (BPMOPS class) was implemented while respecting the standard interface for beam position measurements. The BPMOPS server publishes values on request after data extraction and conversion from the TMS server. This software is running on a VME Lynx-OS platform and through dedicated electronics it can therefore control th...

  7. Bringing Ad-Hoc Analytics to Big Earth Data: the EarthServer Experience

    Science.gov (United States)

    Baumann, Peter

    2014-05-01

    From the commonly accepted Vs defining the Big Data challenge - volume, velocity, variety - we more and more learn that the sheer volume is not the only, and often not even the decisive, factor inhibiting access and analytics. In particular, variety of data is frequently the core issue, posing manifold problems. Based on this observation we claim that a key aspect of analytics is the freedom to ask any questions, simple or complex, anytime, combining any choice of data structures, however divergent they may be. Techniques for such "ad-hoc queries" can actually be learned from classical databases. Their concept of high-level query languages brings along several benefits: uniform semantics, allowing machine-to-machine communication, including automatic generation of queries; massive server-side optimization and parallelization; and attractive client interfaces that hide the query syntax from casual users while allowing power users to utilize it. However, these benefits used to be available only on tabular and set-oriented data, text, and - more recently - graph data. With the advent of Array Databases, they become available on large multidimensional raster data assets as well, getting one step closer to the Holy Grail of integrated, uniform retrieval for users. EarthServer is a transatlantic initiative setting up operational infrastructures based on this paradigm. In our talk, we present core EarthServer technology concepts as well as a spectrum of Earth Science applications utilizing the EarthServer platform for versatile, visualisation-supported analytics services. Further, we discuss the substantial impact EarthServer is having on Big Geo Data standardization in OGC and ISO. Time and Internet connection permitting, a live demo can be presented.

  8. Fuzzy Modeling of Client Preference in Data-Rich Marketing Environments

    NARCIS (Netherlands)

    M. Setnes; U. Kaymak (Uzay)

    2000-01-01

    textabstractAdvances in computational methods have led, in the world of financial services, to huge databases of client and market information. In the past decade, various computational intelligence (CI) techniques have been applied in mining this data for obtaining knowledge and in-depth informatio

  9. Team-client Relationships And Extreme Programming

    Directory of Open Access Journals (Sweden)

    John Karn

    2008-01-01

    Full Text Available This paper describes a study that examined the relationship between software engineering teams who adhered to the extreme programming (XP methodology and their project clients. The study involved observing teams working on projects for clients who had commissioned a piece of software to be used in the real world. Interviews were conducted during and at the end of the project to get client opinion on how the project had progressed. Of interest to the researchers were opinions on frequency of feedback, how the team captured requirements, whether or not the iterative approach of XP proved to be helpful, and the level of contextual and software engineering knowledge the client had at the start of the project. In theory, fidelity to XP should result in enhanced communication, reduce expectation gaps, and lead to greater client satisfaction. Our results suggest that this depends heavily on the communication skills of the team and of the client, the expectations of the client, and the nature of the project.

  10. Uniform guidelines improve client care.

    Science.gov (United States)

    Barnett, B

    1994-12-01

    Uniform national guidelines on the delivery of family planning methods and services improve client care, assuming these guidelines are based on current scientific information. Compliance with these guidelines yields safe and efficient delivery of family planning services. Service providers need information, training, supplies, and guidelines to deliver quality services. Guidelines contribute to consistency among family planning programs in different settings. Even though clinics may not provide the same services, the guidelines allow them to provide the same standards of care. Specifically, eligibility criteria, contraindications, and follow-up schedules are the same regardless of the service delivery point. Various international health organizations (such as World Health Organization, USAID, Program for International Training in Health, International Planned Parenthood Federation, and Association for Voluntary Surgical Contraception) have developed guidelines for family planning service delivery. Governments can use these documents to develop national family planning guidelines and policies. They should adapt the guidelines to local needs and consider program resources. After development of the national guidelines, training, workshops, and dissemination of written materials should be provided for policymakers, physicians, nurses, and other health providers. Countries that have either developed or are working to draft their own national guidelines are Cameroon, Ghana, Mexico, and Nepal.

  11. Design and Implementation of a WebIM Client Based on Openfire and the XMPP Protocol

    Institute of Scientific and Technical Information of China (English)

    左海春

    2014-01-01

    WebIM is based on the HTTP protocol and the system is developed with a B/S (browser/server) architecture: the client, in the form of a web page, communicates in real time with the Openfire server and with other clients. In a B/S system the functionality is managed and maintained centrally on the server, which both lowers the maintenance effort and reduces deployment costs. WebIM technology is therefore of great significance for instant messaging, Web-based remote monitoring, web-site customer service and similar applications. To address the message delay introduced by the periodic requests of the client-pull (Client_pull) model used in existing WebIM clients, as well as the heavy client-server traffic it causes, a WebIM system based on the server-push (Server-push) model is proposed, using the open-source Openfire server and XMPP as the communication protocol between server and client. A strategy for maintaining long-lived HTTP connections is given, and the WebIM system is implemented. With server push the user experience is improved, and web users no longer perceive message delays.
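
    For illustration, the client half of such a server-push-over-HTTP (long-polling) loop might look like the following Python sketch; the endpoint URL and JSON message format are hypothetical and do not correspond to Openfire's actual HTTP binding:

        import json
        import urllib.request
        import urllib.error

        # Hypothetical long-polling endpoint of a WebIM-style service.
        POLL_URL = "http://chat.example.com/messages?since={cursor}"

        def poll_forever(cursor=0):
            while True:
                req = urllib.request.Request(POLL_URL.format(cursor=cursor))
                try:
                    # The server holds the request open until a message arrives
                    # or its own timeout expires, so new messages are delivered
                    # with almost no delay and without rapid periodic polling.
                    with urllib.request.urlopen(req, timeout=60) as resp:
                        payload = json.loads(resp.read().decode())
                except (urllib.error.URLError, TimeoutError):
                    continue                      # timed out or transient error: re-issue the poll
                for msg in payload.get("messages", []):
                    print("incoming:", msg)
                cursor = payload.get("cursor", cursor)

        # poll_forever()  # would block; run in a background thread in a real client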

  12. Justified Cross-Site Scripting Attacks Prevention from Client-Side

    Directory of Open Access Journals (Sweden)

    A.MONIKA

    2014-07-01

    Full Text Available Web applications are becoming the dominant way to provide access to online services. In parallel, vulnerabilities in web applications are being discovered and disclosed at an alarming rate. Web applications frequently make use of JavaScript code embedded in web pages to provide dynamic client-side behavior. This script code is executed in the context of the user's web browser. To protect the client's environment from malicious JavaScript code, a mechanism known as sandboxing is used that confines a program to accessing only resources associated with its origin site. Unfortunately, these protection mechanisms fail if a user can be lured into downloading malicious JavaScript code from an intermediate, trusted site. In this case, the malicious script is granted full access to all resources (for example, cookies and authentication tokens) that belong to the trusted site. Such attacks are called cross-site scripting (XSS) attacks. In general, cross-site scripting attacks are simple to perform but difficult to detect and prevent. One reason is the high flexibility of HTML encoding schemes, which offers the attacker many opportunities for circumventing the server-side input filters that should prevent malicious scripts from entering trusted sites. Also, developing a client-side solution is not easy because of the difficulty of identifying JavaScript code as malicious. This work presents Noxes, which is, to the best of our knowledge, the first client-side solution to mitigate cross-site scripting attacks. Noxes works as a web proxy and uses both automatically and manually generated rules to mitigate possible cross-site scripting attempts. Noxes effectively protects against data leakage from the client's environment while requiring minimal client interaction and customization effort.
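
    Noxes itself is a client-side web proxy with its own rule engine; purely as an illustration of the underlying idea (flagging references that would send data to a host other than the page's own origin), here is a small Python sketch in which the link-extraction regex, the allow-list and the policy are simplified assumptions:

        from urllib.parse import urlparse
        import re

        # Rough illustration of an origin-based filtering rule in the spirit of
        # client-side XSS mitigation: external references embedded in a page are
        # allowed only when they point back to the page's own site (or to an
        # explicit allow-list); everything else is reported as a possible leak.
        ALLOWED_HOSTS = {"cdn.example.com"}      # hypothetical manually added rule

        LINK_RE = re.compile(r"""(?:src|href)\s*=\s*["']([^"']+)["']""", re.IGNORECASE)

        def suspicious_references(page_url, html):
            origin = urlparse(page_url).hostname
            flagged = []
            for target in LINK_RE.findall(html):
                host = urlparse(target).hostname
                if host and host != origin and host not in ALLOWED_HOSTS:
                    flagged.append(target)       # candidate for blocking or a user prompt
            return flagged

        html = '<img src="http://evil.example.net/steal?c=SESSIONID"><a href="/home">home</a>'
        print(suspicious_references("http://trusted.example.org/page", html))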

  13. Client and therapist variability in clients' perceptions of their therapists' multicultural competencies.

    Science.gov (United States)

    Owen, Jesse; Leach, Mark M; Wampold, Bruce; Rodolfa, Emil

    2011-01-01

    This study examined therapist differences in their clients' ratings of their therapists' multicultural competencies (MCCs) as well as tested whether therapists' who were rated as exhibiting more MCCs also had clients who had better therapy outcomes (N = 143 clients and 31 therapists). All clients completed at least 3 sessions. Results demonstrated that therapists accounted for less than 1% of the variance in their clients' Cross-Cultural Counseling Inventory–Revised (CCCI-R; T. D. LaFromboise, H. L. K. Coleman, & A. Hernandez, 1991) scores, suggesting that therapists did not differ in terms of how clients rated their MCCs. Therapists accounted for approximately 8.5% of the variance in therapy outcomes. For each therapist, their clients' CCCI-R scores were aggregated to provide an estimate of therapists' MCCs. Therapists' MCCs, based on aggregate CCCI-R scores, did not account for the variability in therapy outcomes that were attributed to them. Additionally, clients' race/ethnicity, therapists' race/ethnicity, or the interaction of clients'–therapists' race/ethnicity were not significantly associated with clients' perceptions of their therapists' MCCs.

  14. Client-Oriented Approach: Forming the System of Management of the Bank Relations with Clients

    Directory of Open Access Journals (Sweden)

    Zavadska Diana V.

    2015-03-01

    Full Text Available The aim of the article is to develop the theoretical principles of forming bank relations with clients as part of implementing a client-oriented strategy. As a result of the research, the definition of client orientation and the corresponding management mechanism and system are presented. The system for managing the bank's relations with clients, and the purpose and objectives of its formation, are substantiated. The hierarchy of subjects involved in forming and managing the bank's relations with clients is presented. Ways of implementing the functions of the client relationship management mechanism in practice are revealed. It is shown that, to implement the client-oriented approach, a banking institution should have a comprehensive view of its clients' behavior; a detailed understanding of that behavior allows more accurate segmentation and the building of individualized partnership relations. Implementing the principle of comprehensive knowledge of client relationships at every level, developing employee behavior techniques and special techniques for working with the most valuable clients, and using analytics and forecasting tools will make marketing campaigns better targeted and lead to minimization of additional costs, satisfaction of every client, loyalty, an increased market share, growth in sales volume and higher profits for the banking institution.

  15. Web server with ATMEGA 2560 microcontroller

    Science.gov (United States)

    Răduca, E.; Ungureanu-Anghel, D.; Nistor, L.; Haţiegan, C.; Drăghici, S.; Chioncel, C.; Spunei, E.; Lolea, R.

    2016-02-01

    This paper presents the design and building of a Web server to remotely command, control and monitor a large number of industrial or personal devices and/or sensors. The server runs custom software, which can be written by users and works with many types of operating system. The authors implemented the Web server on two platforms: a microcontroller (UC) board and a network board. The source code was written in the open-source Arduino 1.0.5 language.

  16. Windows Server 2012 and Active Directory

    OpenAIRE

    2015-01-01

    The topic of this thesis was to explore the services included in the Windows Server 2012 software and to take a closer look at the basic use of Active Directory. The goal was to give the reader an understanding of the possibilities offered by Windows Server 2012 and of how Active Directory is used. The theoretical background of the thesis covered the use of a virtual environment and the various services of Windows Server 2012, including, for example, the following concepts: virtualization, emulation, ...

  17. Getting started with SQL Server 2014 administration

    CERN Document Server

    Ellis, Gethyn

    2014-01-01

    This is an easy-to-follow, hands-on tutorial that includes real-world examples of SQL Server 2014's new features. Each chapter is explained in a step-by-step manner which guides you to implement the new technology. If you want to create a highly efficient database server then this book is for you. This book is for database professionals and system administrators who want to use the added features of SQL Server 2014 to create a hybrid environment, which is both highly available and allows you to get the best performance from your databases.

  18. Environment server. Digital field information archival technology

    Energy Technology Data Exchange (ETDEWEB)

    Kita, Nobuyuki; Kita, Yasuyo; Yang, Hai-quan [National Institute of Advanced Industrial Science and Technology, Intelligent Systems Research Institute, Tsukuba, Ibaraki (Japan)

    2002-01-01

    For the safe operation of nuclear power plants, it is important to store various kinds of information about the plants for a long period and to visualize the stored information as desired. The system called Environment Server was developed to realize this. In this paper, the general concepts of the Environment Server are explained, and its partial implementation for archiving the image information gathered by mobile inspection robots into a virtual world and visualizing it is described. An extension of the Environment Server for supporting attention sharing is also briefly introduced. (author)

  19. SciServer: An Online Collaborative Environment for Big Data in Research and Education

    Science.gov (United States)

    Raddick, Jordan; Souter, Barbara; Lemson, Gerard; Taghizadeh-Popp, Manuchehr

    2017-01-01

    For the past year, SciServer Compute (http://compute.sciserver.org) has offered access to big data resources running within server-side Docker containers. Compute has allowed thousands of researchers to bring advanced analysis to big datasets like the Sloan Digital Sky Survey and others, while keeping the analysis close to the data for better performance and easier read/write access. SciServer Compute is just one part of the SciServer system being developed at Johns Hopkins University, which provides an easy-to-use collaborative research environment for astronomy and many other sciences.SciServer enables these collaborative research strategies using Jupyter notebooks, in which users can write their own Python and R scripts and execute them on the same server as the data. We have written special-purpose libraries for querying, reading, and writing data. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files.SciServer Compute’s virtual research environment has grown with the addition of task management and access control functions, allowing collaborators to share both data and analysis scripts securely across the world. These features also open up new possibilities for education, allowing instructors to share datasets with students and students to write analysis scripts to share with their instructors. We are leveraging these features into a new system called “SciServer Courseware,” which will allow instructors to share assignments with their students, allowing students to engage with big data in new ways.SciServer has also expanded to include more datasets beyond the Sloan Digital Sky Survey. A part of that growth has been the addition of the SkyQuery component, which allows for simple, fast

  20. A Methodology and Tool for Investigation of Artifacts Left by the BitTorrent Client

    Directory of Open Access Journals (Sweden)

    Algimantas Venčkauskas

    2016-05-01

    Full Text Available The BitTorrent client application is a popular utility for sharing large files over the Internet. Sometimes, this powerful utility is used to commit cybercrimes, like sharing of illegal material or illegal sharing of legal material. In order to help forensics investigators to fight against these cybercrimes, we carried out an investigation of the artifacts left by the BitTorrent client. We proposed a methodology to locate the artifacts that indicate the BitTorrent client activity performed. Additionally, we designed and implemented a tool that searches for the evidence left by the BitTorrent client application in a local computer running Windows. The tool looks for the four files holding the evidence. The files are as follows: *.torrent, dht.dat, resume.dat, and settings.dat. The tool decodes the files, extracts important information for the forensic investigator and converts it into XML format. The results are combined into a single result file.

  1. FalconStor iSCSI Storage Server for Windows Storage Server 2003

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    FalconStor iSCSI Storage Server for Windows Storage Server 2003 is the first iSCSI target and storage management product on the market based on the Windows Storage Server 2003 platform. It uses the existing enterprise network to provide a comprehensive, highly available and cost-effective IP SAN storage system.

  2. The Waveform Server: A Web-based Interactive Seismic Waveform Interface

    Science.gov (United States)

    Newman, R. L.; Clemesha, A.; Lindquist, K. G.; Reyes, J.; Steidl, J. H.; Vernon, F. L.

    2009-12-01

    Seismic waveform data has traditionally been displayed on machines that are either connected by a local area network to, or directly host, a seismic network's waveform database(s). Typical seismic data warehouses allow online users to query and download data collected from regional networks passively, without the scientist directly assessing data coverage and/or quality visually. Using a suite of web-based protocols, we have developed an online seismic waveform interface that directly queries and displays data from a relational database through a web browser. Using the Python interface to Datascope and the Python-based Twisted network package on the server side, and the jQuery JavaScript framework on the client side to send and receive asynchronous waveform queries, we display broadband seismic data using the HTML Canvas element, globally accessible to anyone using a modern web browser. The system is used to display data from the USArray experiment, a US continent-wide migratory transportable seismic array. We are currently creating additional interface tools to create a rich-client interface for accessing and displaying seismic data that can be deployed to any system running Boulder Real Time Technology's (BRTT) Antelope Real Time System (ARTS). The software is freely available from the Antelope contributed code Git repository. (Figure: screenshot of the web-based waveform server interface.)
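
    The abstract mentions Twisted on the server side; a minimal, hypothetical sketch of a JSON waveform endpoint in Twisted (not the actual Waveform Server code, which queries a Datascope database) could look like this:

        import json
        from twisted.web import resource, server
        from twisted.internet import reactor

        class Waveform(resource.Resource):
            isLeaf = True

            def render_GET(self, request):
                request.setHeader(b"content-type", b"application/json")
                # Placeholder samples; a real handler would query the waveform
                # database for the requested station/channel/time window.
                return json.dumps({"sta": "ANMO", "samples": [0, 1, 0, -1]}).encode()

        reactor.listenTCP(8080, server.Site(Waveform()))
        reactor.run()   # blocks; the browser client would fetch and draw on a Canvas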

  3. Counselor Values and the Pregnant Adolescent Client.

    Science.gov (United States)

    Kennedy, Bebe C.; And Others

    1984-01-01

    Reviews options counselors can suggest to pregnant adolescents, including abortion, adoption, marriage, and single parenthood. Discusses the need for counselors to be aware of their own values and help the client explore her values. (JAC)

  4. Clients Who Frequent Madam Barnett's Emporium.

    Science.gov (United States)

    Russell, Scott

    1999-01-01

    Develops a comparison between writing tutors and prostitutes. Suggests that the intimate arrangement of people that places one in the position of professional and the other in the position of client works against collaboration. (NH)

  5. Managing Client Values in Construction Design

    DEFF Research Database (Denmark)

    Thyssen, Mikael Hygum; Emmitt, Stephen; Bonke, Sten

    2008-01-01

    In construction projects the client comprises the owner, the end-users, and the wider society, representatives of which may have conflicting goals and values; and these may not be fully realized by the stakeholders themselves. Therefore it is a great challenge to capture and manage the values...... of the multiple stakeholders that constitute the “client”. However, seeing client satisfaction as the end-goal of construction, it is imperative to make client values explicit in the early project phase and make sure that these values are reflected in all subsequent phases of design and construction.... The management challenge is further complicated by the fact that the delivery team, who are to understand and deliver client value, consists of even more different parties. To address this, a Danish engineering consultancy company has, together with a major contractor, developed a value-based workshop method...

  6. Caring for Clients and Families With Anxiety

    Directory of Open Access Journals (Sweden)

    Noriko Yamamoto-Mitani

    2016-08-01

    Full Text Available This study elucidated Japanese home care nurses’ experiences of supporting clients and families with anxiety. We interviewed 10 registered nurses working in home care agencies and analyzed the data using grounded theory to derive categories pertaining to the nurses’ experiences of providing care. We conceptualized nurses’ approaches to caring for anxiety into three categories: First, they attempted to reach out for anxiety even when the client/family did not make it explicit; second, they tried to alter the outlook of the situation; and third, they created comfort in the lives of the client/family. The conceptualizations of nurses’ strategies to alleviate client/family anxiety may reflect Japanese/Eastern cultural characteristics in communication and their view of the person and social care system, but these conceptualizations may also inform the practice of Western nurses by increasing awareness of skills they may also have and use.

  7. Secure data aggregation in heterogeneous and disparate networks using stand off server architecture

    Science.gov (United States)

    Vimalathithan, S.; Sudarsan, S. D.; Seker, R.; Lenin, R. B.; Ramaswamy, S.

    2009-04-01

    The emerging global reach of technology presents myriad challenges and intricacies as Information Technology teams aim to provide anywhere, anytime and anyone access, for service providers and customers alike. The world is fraught with stifling inequalities, from an economic as well as a socio-political perspective. The net result has been large capability gaps between the various organizational locations that need to work together, which has raised new challenges for information security teams. Similar issues arise when mergers and acquisitions among and between organizations take place. While integrating remote business locations with mainstream operations, one or more issues including the lack of application-level support, limited computational capabilities, communication limitations, and legal requirements cause serious impediments, complicating integration while the organizations' security requirements must still be met. Commonly used techniques such as IPSec, tunneling, and secure sockets layer may not always be techno-economically feasible. This paper addresses such security issues by introducing an intermediate server, called a stand-off server, between the corporate central server and the remote sites. We present techniques such as break-before-make connections, breaking the connection after transfer, and multiple virtual machine instances with different operating systems, all built on the concept of a stand-off server. Our experiments show that the proposed solution provides sufficient isolation of the central server/site from attacks arising out of weak communication and/or computing links and is simple to implement.

  8. Optimal Control of the D-Policy M/G/1 Queueing System with Server Breakdowns

    Directory of Open Access Journals (Sweden)

    Kuo-Hsiung Wang

    2008-01-01

    Full Text Available This study deals with a single server in the D-policy M/G/1 queueing system in which the server is turned off at the end of each complete period and is activated again only when the cumulative completion times of the customers in the system exceeds a given level D. While the server is working, he is subject to breakdowns according to a Poisson process. When the server breaks down, he requires repair at a repair facility, where the repair time obeys a general distribution. We have demonstrated that the probability that the server is busy in the steady-state is equal to the traffic intensity. The total expected cost function per customer per unit time is constructed to determine the optimal operating D-policy at a minimum cost. We use the steady-state analytic results and apply an efficient Matlab computer program to calculate the optimal value of D. Based on three different service distributions: exponential, 3-stage Erlang and deterministic, we provide extensive numerical computation for illustration purpose. Sensitivity analysis is also investigated.

  9. Obtaining the Knowledge of a Server Performance from Non-Intrusively Measurable Metrics

    Directory of Open Access Journals (Sweden)

    Satoru Ohta

    2016-04-01

    Full Text Available Most network services are provided by server computers. To provide these services with good quality, server performance must be managed adequately. For server management, performance information is commonly obtained from the operating system (OS) and hardware of the managed computer. However, this method has a disadvantage: if performance is degraded by excessive load or hardware faults, it becomes difficult to collect and transmit the information. Thus, it is necessary to obtain the information without interfering with the server's OS and hardware. This paper investigates a technique that utilizes non-intrusively measurable metrics, obtained through passive traffic monitoring and from electric currents monitored by sensors attached to the power supply. However, these metrics do not directly represent the performance experienced by users. Hence, it is necessary to discover the complicated function that maps the metrics to the true performance information. To discover this function from measured samples, a machine learning technique based on a decision tree is examined. The technique is important because it is applicable to the power management of server clusters and to the migration control of virtual servers.
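
    A toy sketch of the learning step, using a scikit-learn decision tree on synthetic data, is shown below; the metrics, labels and thresholds are invented for illustration and are not the paper's measurements:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # Illustrative only: learn a mapping from non-intrusive metrics
        # (request rate seen on the wire, supply current) to a coarse
        # performance label. All numbers below are synthetic.
        rng = np.random.default_rng(0)
        rate = rng.uniform(0, 2000, 500)                         # requests/s from passive capture
        amps = 0.5 + 0.001 * rate + rng.normal(0, 0.05, 500)     # measured supply current
        X = np.column_stack([rate, amps])
        y = (rate > 1500).astype(int)                            # 1 = "degraded", labeled offline

        model = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print(model.predict([[1700, 2.3], [300, 0.8]]))          # classify two new observations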

  10. Mastering Windows Server 2012 R2

    CERN Document Server

    Minasi, Mark; Booth, Christian; Butler, Robert; McCabe, John; Panek, Robert; Rice, Michael; Roth, Stefan

    2013-01-01

    Check out the new Hyper-V, find new and easier ways to remotely connect back into the office, or learn all about Storage Spaces-these are just a few of the features in Windows Server 2012 R2 that are explained in this updated edition from Windows authority Mark Minasi and a team of Windows Server experts led by Kevin Greene. This book gets you up to speed on all of the new features and functions of Windows Server, and includes real-world scenarios to put them in perspective. If you're a system administrator upgrading to, migrating to, or managing Windows Server 2012 R2, find what you need to

  11. Geologic Hazards Science Center GIS Server

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The USGS Geologic Hazards Science Center (GHSC) in Golden, CO maintains a GIS server with services pertaining to various geologic hazard disciplines involving...

  12. Conversation Threads Hidden within Email Server Logs

    Science.gov (United States)

    Palus, Sebastian; Kazienko, Przemysław

    Email server logs contain records of all email exchanged through the server. Often we would like to analyze those emails not separately but in conversation threads, especially when we need to analyze a social network extracted from the email logs. Unfortunately, each mail is stored in a different record and those records are not tied to each other in any obvious way. In this paper a method for discussion thread extraction is proposed, together with experiments on two different data sets: Enron and WrUT.
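
    One common heuristic for this kind of reconstruction (not necessarily the method proposed in the paper) is to follow the In-Reply-To and References headers; a minimal Python sketch:

        from collections import defaultdict
        from email import message_from_string

        # Simplified illustration: group raw message records into threads by
        # linking each message to the message it replies to.
        def build_threads(raw_messages):
            parent = {}
            for raw in raw_messages:
                msg = message_from_string(raw)
                mid = msg["Message-ID"]
                refs = (msg.get("In-Reply-To") or "").split() + (msg.get("References") or "").split()
                parent[mid] = refs[0] if refs else None

            def root(mid):
                # Walk up the reply chain until no parent is known.
                while parent.get(mid):
                    mid = parent[mid]
                return mid

            threads = defaultdict(list)
            for mid in parent:
                threads[root(mid)].append(mid)
            return dict(threads)

        a = "Message-ID: <1@x>\nSubject: plan\n\nbody"
        b = "Message-ID: <2@x>\nIn-Reply-To: <1@x>\nSubject: Re: plan\n\nbody"
        print(build_threads([a, b]))    # both messages land in the same thread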

  13. Desktop-Style Windows Server 2008

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    Windows Server 2008 is a server operating system; used as a desktop it is not as convenient as Windows XP or Windows 7. This article describes a few small changes to the system's configuration that, without affecting its performance as a server system, turn Windows Server 2008 into a more convenient desktop-style system.

  14. P2P and Its Application in Enterprise Computing

    Institute of Scientific and Technical Information of China (English)

    彭舰; 杨思忠; 刘锦德

    2003-01-01

    Owing to the popularity of Napster and Gnutella, the concept of P2P (Peer-to-Peer) has been highlighted again. P2P is a mindset and a rethinking of traditional network computing based on the client/server model. P2P means decentralizing some aspects of a system so that entities can exchange data directly, exploiting the resources at the edge of the network. The implications of P2P are expounded, and some typical P2P systems are listed. This paper also details a taxonomy of P2P computing architectures. Then, we delve into the application of P2P in enterprise computing.

  15. A web based Publish-Subscribe framework for mobile computing

    Directory of Open Access Journals (Sweden)

    Cosmina Ivan

    2014-05-01

    Full Text Available The growing popularity of mobile devices is permanently changing the Internet user's computing experience. Smartphones and tablets are beginning to replace the desktop as the primary means of interacting with various information technology and web resources. While mobile devices facilitate consuming web resources in the form of web services, the growing demand for consuming services on mobile devices is introducing a complex ecosystem in the mobile environment. This research addresses the communication challenges involved in mobile distributed networks and proposes an event-driven communication approach for information dissemination. It investigates different communication techniques such as polling, long-polling and server-side push as client-server interaction mechanisms, and the latest web technology standard, WebSocket, as the communication protocol within a Publish/Subscribe paradigm. Finally, this paper introduces and evaluates the proposed framework, a hybrid approach combining WebSocket and event-based publish/subscribe for operating in mobile environments.
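
    The event-driven core of such a framework can be sketched independently of the WebSocket transport; the following minimal, hypothetical asyncio publish/subscribe broker shows the push-style dissemination the paper builds on (a real deployment would expose it over WebSocket or long-polling):

        import asyncio
        from collections import defaultdict

        # Minimal in-memory publish/subscribe broker: the event-driven core that a
        # WebSocket (or long-polling) transport would sit on top of.
        class Broker:
            def __init__(self):
                self.topics = defaultdict(set)        # topic -> set of subscriber queues

            def subscribe(self, topic):
                q = asyncio.Queue()
                self.topics[topic].add(q)
                return q

            async def publish(self, topic, message):
                for q in self.topics[topic]:
                    await q.put(message)              # push to subscribers instead of being polled

        async def demo():
            broker = Broker()
            inbox = broker.subscribe("news")
            await broker.publish("news", "server-side event")
            print(await inbox.get())

        asyncio.run(demo())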

  16. Performance enhancement of a web-based picture archiving and communication system using commercial off-the-shelf server clusters.

    Science.gov (United States)

    Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.

  17. CloudGenius: Decision Support for Web Server Cloud Migration

    CERN Document Server

    Menzel, Michael

    2012-01-01

    Cloud computing is the latest computing paradigm that delivers hardware and software resources as virtualized services in which users are free from the burden of worrying about the low-level system administration details. Migrating Web applications to Cloud services and integrating Cloud services into existing computing infrastructures is non-trivial. It leads to new challenges that often require innovation of paradigms and practices at all levels: technical, cultural, legal, regulatory, and social. The key problem in mapping Web applications to virtualized Cloud services is selecting the best and compatible mix of software images (e.g., Web server image) and infrastructure services to ensure that Quality of Service (QoS) targets of an application are achieved. The fact that, when selecting Cloud services, engineers must consider heterogeneous sets of criteria and complex dependencies between infrastructure services and software images, which are impossible to resolve manually, is a critical issue. To overcom...

  18. Pursuing Therapeugenic Consequences of Restricting Client Smoking during Counseling.

    Science.gov (United States)

    Schneider, Lawrence J.; Dearing, Nancy

    Theorists and therapists have become increasingly attentive to the role of interpersonal behaviors that facilitate or hinder the ability of the counselor to exert influence over the client during counseling. A study was conducted to examine the impact of a counselor's preference that clients not smoke, client stress levels, client sex, and…

  19. Resolving an Error That Interrupts SQL Server 2000 Installation

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    When installing Microsoft SQL Server 2000 under Windows 2000, I get the following error message: "A previous program installation created pending file operations on the installation machine. You must restart the computer before running setup." The installation is then interrupted. How should this be handled?

  20. Smart cloud system with image processing server in diagnosing brain diseases dedicated for hospitals with limited resources.

    Science.gov (United States)

    Fahmi, Fahmi; Nasution, Tigor H

    2017-01-19

    The use of medical imaging in diagnosing brain disease is growing. The challenges are related to the large size of the data and the complexity of the image processing, which demand high-end hardware and software that can usually only be provided in big hospitals. Our purpose was to provide a smart cloud system that helps diagnose brain diseases for hospitals with limited infrastructure. The expertise of neurologists was first encoded in the cloud server to conduct an automatic diagnosis in real time, using an image processing technique developed with the ITK library and a web service. Users upload images through a website and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm2 and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.
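
    From the client's point of view the workflow reduces to an HTTP upload followed by a result message; a hypothetical sketch using the third-party requests package is given below (the endpoint URL, field names and response keys are assumptions, not the system's actual API):

        import requests

        # Hypothetical endpoint of the cloud analysis service.
        UPLOAD_URL = "http://cloud.example.org/brain/analyze"

        with open("patient_scan.jpg", "rb") as fh:
            # The service accepts an uploaded scan and returns the computed tumour size.
            resp = requests.post(UPLOAD_URL, files={"image": fh}, timeout=30)

        result = resp.json()
        print("tumour area (cm^2):", result.get("area_cm2"))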

  1. Implementing Scalable Server Applications with Completion Ports

    Institute of Scientific and Technical Information of China (English)

    吴星; 黄爱萍

    2002-01-01

    With the increasing variety of telecommunication services and the need to support more and more client connections, application servers face the challenge of being overloaded by a tremendous number of requests. On the Windows platform, component programming provides a distributed structure at the business level, and at the user level the I/O completion port (IOCP) is one of the best mechanisms for scalability. In this paper, we describe how to use IOCP.

  2. Constructing the Browser/Web Server Model for Lotus Domino

    Institute of Scientific and Technical Information of China (English)

    李金穗

    2008-01-01

    As applications originally developed for the Lotus Domino/Notes Client/Server model are migrated to today's Browser/Web Server (B/S) model and put into practical use, this article describes in detail the advantages of the B/S model on an enterprise LAN and the concrete methods of building a Lotus Domino/Notes B/S operating mode.

  3. Regulating Response Time in an Autonomic Computing System: A Comparison of Proportional Control and Fuzzy Control Approaches

    Directory of Open Access Journals (Sweden)

    Harish S. Venkatarama

    2010-10-01

    Full Text Available Ecommerce is an area where an Autonomic Computing system could be deployed very effectively. Ecommerce has created demand for high-quality information technology services, and businesses are seeking quality of service guarantees from their service providers. These guarantees are expressed as part of service level agreements. Properly adjusting the tuning parameters used to enforce a service level agreement is time-consuming and skills-intensive. Moreover, when the workload changes, the parameter settings may no longer be optimal. In an ecommerce system, where the workload changes frequently, the parameters need to be updated at regular intervals. This paper describes two approaches, one using a proportional controller and the other using a fuzzy controller, to automate the tuning of the MaxClients parameter of the Apache web server based on the required response time and the current workload. This is an illustration of the self-optimizing characteristic of an autonomic computing system.
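
    A minimal sketch of the proportional-control idea is given below; the gain, bounds and measurement hook are illustrative assumptions rather than the controllers designed in the paper:

        # Toy sketch: nudge Apache's MaxClients toward a response-time target.
        TARGET_RT = 0.5          # desired response time in seconds (assumed)
        KP = 20.0                # proportional gain (assumed)

        def next_max_clients(current, measured_rt):
            error = measured_rt - TARGET_RT
            new_value = current - KP * error          # too slow -> admit fewer concurrent clients
            return int(min(512, max(16, new_value)))  # clamp to assumed sane Apache limits

        # One step of the control loop, driven by a hypothetical monitoring hook.
        print(next_max_clients(150, measured_rt=0.9))   # response too slow -> lower MaxClients
        print(next_max_clients(150, measured_rt=0.2))   # response fast     -> raise MaxClients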

  4. MyDas, an extensible Java DAS server.

    Directory of Open Access Journals (Sweden)

    Gustavo A Salazar

    A large number of diverse, complex, and distributed data resources are currently available in the Bioinformatics domain. The pace of discovery and the diversity of information mean that centralised reference databases like UniProt and Ensembl cannot integrate all potentially relevant information sources. From a user perspective, however, centralised access to all relevant information concerning a specific query is essential. The Distributed Annotation System (DAS) defines a communication protocol to exchange annotations on genomic and protein sequences; this standardisation enables clients to retrieve data from a myriad of sources, thus offering centralised access to end-users. We introduce MyDas, a web server that facilitates the publishing of biological annotations according to the DAS specification. It deals with the common functionality requirements of making data available, while also providing an extension mechanism in order to implement the specifics of data store interaction. MyDas allows the user to define where the required information is located along with its structure, and is then responsible for the communication protocol details.

  5. MyDas, an extensible Java DAS server.

    Science.gov (United States)

    Salazar, Gustavo A; García, Leyla J; Jones, Philip; Jimenez, Rafael C; Quinn, Antony F; Jenkinson, Andrew M; Mulder, Nicola; Martin, Maria; Hunter, Sarah; Hermjakob, Henning

    2012-01-01

    A large number of diverse, complex, and distributed data resources are currently available in the Bioinformatics domain. The pace of discovery and the diversity of information mean that centralised reference databases like UniProt and Ensembl cannot integrate all potentially relevant information sources. From a user perspective, however, centralised access to all relevant information concerning a specific query is essential. The Distributed Annotation System (DAS) defines a communication protocol to exchange annotations on genomic and protein sequences; this standardisation enables clients to retrieve data from a myriad of sources, thus offering centralised access to end-users. We introduce MyDas, a web server that facilitates the publishing of biological annotations according to the DAS specification. It deals with the common functionality requirements of making data available, while also providing an extension mechanism in order to implement the specifics of data store interaction. MyDas allows the user to define where the required information is located along with its structure, and is then responsible for the communication protocol details.
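
    Since DAS is plain HTTP plus XML, a client needs very little code to pull annotations from a MyDas-style source; the sketch below issues a features request and lists the returned features. The server URL and data-source name are placeholders, as they depend on the deployment.

```python
# Minimal DAS client: fetch annotations for a sequence segment over HTTP.
# The server URL, data-source name and segment are hypothetical placeholders.
import urllib.request
import xml.etree.ElementTree as ET

def das_features(server: str, source: str, segment: str) -> list[dict]:
    url = f"{server}/das/{source}/features?segment={segment}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        tree = ET.parse(resp)
    features = []
    for feat in tree.iter("FEATURE"):          # DASGFF <FEATURE> elements
        features.append({
            "id": feat.get("id"),
            "label": feat.get("label"),
            "type": (feat.findtext("TYPE") or "").strip(),
        })
    return features

if __name__ == "__main__":
    for f in das_features("http://example.org", "my_annotations", "P12345"):
        print(f)
```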

  6. Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431

    Energy Technology Data Exchange (ETDEWEB)

    Alliance to Save Energy; ICF Incorporated; ERG Incorporated; U.S. Environmental Protection Agency; Brown, Richard E; Brown, Richard; Masanet, Eric; Nordman, Bruce; Tschudi, Bill; Shehabi, Arman; Stanley, John; Koomey, Jonathan; Sartor, Dale; Chan, Peter; Loper, Joe; Capana, Steve; Hedman, Bruce; Duff, Rebecca; Haines, Evan; Sass, Danielle; Fanara, Andrew

    2007-08-02

    This report was prepared in response to the request from Congress stated in Public Law 109-431 (H.R. 5646),"An Act to Study and Promote the Use of Energy Efficient Computer Servers in the United States." This report assesses current trends in energy use and energy costs of data centers and servers in the U.S. (especially Federal government facilities) and outlines existing and emerging opportunities for improved energy efficiency. It also makes recommendations for pursuing these energy-efficiency opportunities broadly across the country through the use of information and incentive-based programs.

  7. Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431: Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Alliance to Save Energy; ICF Incorporated; ERG Incorporated; U.S. Environmental Protection Agency; Brown, Richard E; Brown, Richard; Masanet, Eric; Nordman, Bruce; Tschudi, Bill; Shehabi, Arman; Stanley, John; Koomey, Jonathan; Sartor, Dale; Chan, Peter; Loper, Joe; Capana, Steve; Hedman, Bruce; Duff, Rebecca; Haines, Evan; Sass, Danielle; Fanara, Andrew

    2007-08-02

    This report is the appendices to a companion report, prepared in response to the request from Congress stated in Public Law 109-431 (H.R. 5646),"An Act to Study and Promote the Use of Energy Efficient Computer Servers in the United States." This report assesses current trends in energy use and energy costs of data centers and servers in the U.S. (especially Federal government facilities) and outlines existing and emerging opportunities for improved energy efficiency. It also makes recommendations for pursuing these energy-efficiency opportunities broadly across the country through the use of information and incentive-based programs.

  8. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Green computing is all about using computers in a smarter and more eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally have been extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  9. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  10. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  11. Analyses of client variables in a series of psychotherapy sessions with two child clients.

    Science.gov (United States)

    Mook, B

    1982-04-01

    Studied the process of child psychotherapy by means of analyses of client verbal behaviors. Audio-video recordings were made of nine intermittent psychotherapy sessions with 2 child clients, aged 8 and 12. A randomized mastertape of 4-minute segments was rated for self-exploration by means of the Carkhuff scale. Transcripts were categorized by means of an extended Snyder system and a preliminary set of grammatical variables. Transcripts then were minutized, and all client variables were intercorrelated and factor-analyzed. According to the research expectations, a high level of interrater reliability for the Carkhuff scale and high levels of interjudge agreement for the extended Snyder system were found. Analyses of the client variables demonstrated the nature of each client's verbal responding as well as their pattern of change across successive therapy sessions. The overall verbal response behavior of each client was summarized best through the factor analyses. Communalities and individual differences between the clients were discussed. Future directions for the study of client variables in child psychotherapy process research were suggested.

  12. Counselor Trainees' Self-Statement Responses to Sexually and Physically Abused Clients, and Client Role Conflict.

    Science.gov (United States)

    Parisien, Lynne S.; Long, Bonita C.

    1994-01-01

    Assessed 63 female counselor trainees after viewing videotape of client reporting sexual abuse, physical abuse, or role conflict. Results indicated that trainees who expected to counsel sexually abused client increased their positive self-statements. Applied Schwartz's States-of-Mind model to self-statement ratios, and, according to model,…

  13. Paying for express checkout: competition and price discrimination in multi-server queuing systems.

    Directory of Open Access Journals (Sweden)

    Cary Deck

    We model competition between two firms selling identical goods to customers who arrive in the market stochastically. Shoppers choose where to purchase based upon both price and the time cost associated with waiting for service. One seller provides two separate queues, each with its own server, while the other seller has a single queue and server. We explore the market impact of the multi-server seller engaging in waiting cost-based-price discrimination by charging a premium for express checkout. Specifically, we analyze this situation computationally and through the use of controlled laboratory experiments. We find that this form of price discrimination is harmful to sellers and beneficial to consumers. When the two-queue seller offers express checkout for impatient customers, the single queue seller focuses on the patient shoppers thereby driving down prices and profits while increasing consumer surplus.

  14. Paying for express checkout: competition and price discrimination in multi-server queuing systems.

    Science.gov (United States)

    Deck, Cary; Kimbrough, Erik O; Mongrain, Steeve

    2014-01-01

    We model competition between two firms selling identical goods to customers who arrive in the market stochastically. Shoppers choose where to purchase based upon both price and the time cost associated with waiting for service. One seller provides two separate queues, each with its own server, while the other seller has a single queue and server. We explore the market impact of the multi-server seller engaging in waiting cost-based-price discrimination by charging a premium for express checkout. Specifically, we analyze this situation computationally and through the use of controlled laboratory experiments. We find that this form of price discrimination is harmful to sellers and beneficial to consumers. When the two-queue seller offers express checkout for impatient customers, the single queue seller focuses on the patient shoppers thereby driving down prices and profits while increasing consumer surplus.
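
    The shoppers' trade-off in this model can be made concrete with a small sketch: each customer compares price plus expected waiting cost across sellers and chooses the cheaper total. The arrival and service rates, prices and waiting-cost values below are made-up numbers, and the standard M/M/1 waiting-time formula stands in for the paper's richer strategic model.

```python
# Illustrative price-plus-waiting-cost comparison between two sellers.
# Uses the standard M/M/1 mean time in queue, Wq = lam / (mu * (mu - lam)),
# as a simple proxy for expected delay; all numbers are assumptions.

def mm1_wait(lam: float, mu: float) -> float:
    """Mean waiting time in queue for an M/M/1 system (requires lam < mu)."""
    assert lam < mu, "queue must be stable"
    return lam / (mu * (mu - lam))

def preferred_seller(prices: dict, waits: dict, cost_per_hour: float) -> str:
    """Pick the seller minimizing price + waiting cost."""
    totals = {s: prices[s] + cost_per_hour * waits[s] for s in prices}
    return min(totals, key=totals.get)

if __name__ == "__main__":
    waits = {
        "single_queue_seller": mm1_wait(lam=8.0, mu=10.0),   # one line, one server
        "express_checkout":    mm1_wait(lam=3.0, mu=10.0),   # lightly loaded premium line
    }
    prices = {"single_queue_seller": 10.0, "express_checkout": 12.0}  # express premium
    for c in (1.0, 20.0):   # patient vs impatient shopper (waiting cost per hour)
        print(f"waiting cost {c}: choose {preferred_seller(prices, waits, c)}")
```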

  15. CentiServer: A Comprehensive Resource, Web-Based Application and R Package for Centrality Analysis.

    Directory of Open Access Journals (Sweden)

    Mahdi Jalili

    Various disciplines are trying to address one of the most noteworthy questions and broadly used concepts in biology: essentiality. Centrality is a primary index and a promising method for identifying essential nodes, particularly in biological networks. The newly created CentiServer is a comprehensive online resource that provides over 110 definitions of different centrality indices, their computational methods, and algorithms in the form of an encyclopedia. In addition, CentiServer allows users to calculate 55 centralities with the help of an interactive web-based application tool and provides a numerical result as a comma separated value (csv) file or a mapped graphical format as a graph modeling language (GML) file. The standalone version of this application has been developed in the form of an R package. The web-based application (CentiServer) and R package (centiserve) are freely available at http://www.centiserver.org/.
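
    For readers who prefer to stay in code, the same kind of centrality ranking can be sketched with a general-purpose graph library. The example below uses networkx rather than the centiserve R package or the CentiServer site itself, so it only illustrates ranking nodes by a few common indices.

```python
# Ranking nodes by a few common centrality indices (illustrative sketch,
# not the CentiServer/centiserve implementation).
import networkx as nx

def top_nodes(graph: nx.Graph, k: int = 3) -> dict:
    measures = {
        "degree": nx.degree_centrality(graph),
        "betweenness": nx.betweenness_centrality(graph),
        "closeness": nx.closeness_centrality(graph),
    }
    return {name: sorted(scores, key=scores.get, reverse=True)[:k]
            for name, scores in measures.items()}

if __name__ == "__main__":
    g = nx.karate_club_graph()          # small benchmark network bundled with networkx
    for index, nodes in top_nodes(g).items():
        print(f"{index:12s} top nodes: {nodes}")
```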

  16. PBOND: Web Server for the Prediction of Proline and Non-Proline cis / trans Isomerization

    Institute of Scientific and Technical Information of China (English)

    Konstantinos P. Exarchos; Themis P. Exarchos; Costas Papaloukas; Anastassios N. Troganis; Dimitrios I. Fotiadis

    2009-01-01

    PBOND is a web server that predicts the conformation of the peptide bond between any two amino acids. PBOND classifies peptide bonds into one of four classes, namely cis imide (cis-Pro), cis amide (cis-nonPro), trans imide (trans-Pro) and trans amide (trans-nonPro). Moreover, for every prediction a reliability index is computed. The underlying structure of the server consists of three stages: (1) feature extraction, (2) feature selection and (3) peptide bond classification. PBOND can handle both single sequences and multiple sequences for batch processing. The predictions can either be downloaded directly from the web site or returned via e-mail. The PBOND web server is freely available at http://195.251.198.21/pbond.html.
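
    The three-stage structure described above (feature extraction, feature selection, classification) maps naturally onto a standard machine-learning pipeline. The sketch below shows that shape with scikit-learn on synthetic residue windows; the features, labels and classifier are placeholders, not PBOND's actual model.

```python
# Three-stage pipeline in the spirit of PBOND: featurise a residue window,
# select features, then classify the peptide bond. Features and labels here
# are synthetic placeholders, not the server's real model.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
CLASSES = ["cis-Pro", "cis-nonPro", "trans-Pro", "trans-nonPro"]

def one_hot_window(window: str) -> np.ndarray:
    """Stage 1: encode a residue window as a flat one-hot feature vector."""
    vec = np.zeros((len(window), len(AMINO_ACIDS)))
    for i, aa in enumerate(window):
        vec[i, AMINO_ACIDS.index(aa)] = 1.0
    return vec.ravel()

# Stages 2 and 3: feature selection followed by classification.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),
    ("classify", SVC(probability=True)),     # probabilities ~ a reliability index
])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    windows = ["".join(rng.choice(list(AMINO_ACIDS), 7)) for _ in range(200)]
    X = np.array([one_hot_window(w) for w in windows])
    y = rng.choice(CLASSES, size=len(windows))          # placeholder labels
    model.fit(X, y)
    print(model.predict(X[:3]), model.predict_proba(X[:3]).max(axis=1))
```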

  17. Data decomposition of Monte Carlo particle transport simulations via tally servers

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K., E-mail: paul.k.romano@gmail.com [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegala@mcs.anl.gov [Argonne National Laboratory, Theory and Computing Sciences, 9700 S Cass Ave., Argonne, IL 60439 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Smith, Kord, E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)

    2013-11-01

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
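
    The split between tracking processors and tally servers can be sketched with MPI: most ranks simulate particles and ship tally contributions, while a dedicated rank only accumulates them. This is a bare-bones illustration with assumed tally sizes and batch counts, not the OpenMC implementation.

```python
# Toy tracking/tally-server split with mpi4py (run e.g. with `mpiexec -n 4`).
# Rank 0 acts as the tally server; the remaining ranks track particles and
# send their tally contributions to it. With one server, every contribution
# lands on rank 0, so it knows how many messages to expect.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
N_SERVERS = 1                      # assumed split; a real code tunes this ratio
N_TALLIES, BATCHES = 1000, 5

if rank < N_SERVERS:               # ---- tally server: accumulate scores ----
    tallies = np.zeros(N_TALLIES)
    trackers = size - N_SERVERS
    for _ in range(BATCHES * trackers):
        buf = np.empty(N_TALLIES)
        comm.Recv(buf, source=MPI.ANY_SOURCE, tag=0)
        tallies += buf
    print(f"server {rank}: total score {tallies.sum():.1f}")
else:                              # ---- tracking processor: simulate batches ----
    rng = np.random.default_rng(rank)
    for _ in range(BATCHES):
        scores = rng.random(N_TALLIES)          # stand-in for particle tracking
        comm.Send(scores, dest=0, tag=0)        # ship the batch to the tally server
```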

  18. Web Server Failure Analysis and Treatment Measures%Web服务器故障分析及处理措施

    Institute of Scientific and Technical Information of China (English)

    陈春晓

    2014-01-01

    A Web server failure not only affects the safe operation of a website but also disrupts people's normal use, so the server must be maintained and upgraded in a timely manner to keep it running properly. Taking a PACS system with a B/S architecture as an example, its Web server runs on Windows IIS, and the client browser can only be used when Windows IIS is in a normal state. If Windows IIS fails, the reliability of the PACS system is affected. This paper analyses several types of failure and proposes solutions to keep the Web server running stably.

  19. WWDC SERVER SOFTWARE INVENTORY MANAGEMENT AND AUTOMATION

    Directory of Open Access Journals (Sweden)

    M. THANJAIVADIVEL

    2012-08-01

    Many organizations maintain large data centres for their business operations, in which different administration teams work on large sets of servers and perform several tasks on demand. Handling this large number of servers manually, in terms of maintaining their configuration, scheduled operations, administrative tasks and so on, is too complicated. Here we propose a new automated approach to server software configuration management, in which the entire server configuration is maintained in an SSCMDB. System administration tasks are first analysed to identify repetitive tasks, which are then automated to some extent to improve efficiency and avoid mistakes. The system includes a host-list module (a host list is a list of hosts used for matching, to examine whether a given host is included in the list or not), which saves time and improves resource utilization by eliminating time-consuming manual processes. It also helps with root cause analysis of issues such as unplanned interruptions that may reduce service quality. Most of the servers run HP-UX, Linux and Solaris, and the system is implemented using Perl, Shell, CGI, JavaScript and MySQL.
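
    A tiny sketch of the host-list idea described above: check whether a host appears in a managed host list and, if so, record its collected configuration in an inventory table. The file layout, table schema and use of SQLite (for self-containment, instead of the MySQL backend mentioned in the abstract) are assumptions for illustration.

```python
# Host-list matching plus a tiny inventory write (illustrative sketch).
# Uses SQLite for self-containment; the described system stores data in MySQL.
import sqlite3
from pathlib import Path

def load_host_list(path: str) -> set[str]:
    """One hostname per line; blank lines and '#' comments ignored."""
    hosts = set()
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            hosts.add(line.lower())
    return hosts

def record_config(db: sqlite3.Connection, host: str, os_name: str, pkg_count: int) -> None:
    db.execute("CREATE TABLE IF NOT EXISTS inventory (host TEXT, os TEXT, packages INT)")
    db.execute("INSERT INTO inventory VALUES (?, ?, ?)", (host, os_name, pkg_count))
    db.commit()

if __name__ == "__main__":
    Path("hosts.txt").write_text("web01.example.com\ndb01.example.com\n")
    managed = load_host_list("hosts.txt")
    host = "web01.example.com"
    if host in managed:                       # the host-list membership check
        with sqlite3.connect("sscmdb.sqlite") as conn:
            record_config(conn, host, "Linux", 842)
        print(host, "recorded in inventory")
```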

  20. An Improvement to a Multi-Client Searchable Encryption Scheme for Boolean Queries.

    Science.gov (United States)

    Jiang, Han; Li, Xue; Xu, Qiuliang

    2016-12-01

    The migration of e-health systems to the cloud brings huge benefits, as well as some security risks. Searchable Encryption (SE) is a cryptographic scheme that can protect the confidentiality of data while allowing the encrypted data to be used at the same time. The SE scheme proposed by Cash et al. in Crypto 2013 and its follow-up work in CCS 2013 are the most practical SE schemes supporting Boolean queries at present. In their scheme, the data user has to generate search tokens from a counter, one by one, and interact with the server repeatedly until the correct token is found, or go through plenty of tokens to establish that there is no search result. In this paper, we make an improvement to their scheme. We allow the server to send back some information to help the user generate the exact search token in the search phase. In our scheme, there are only two rounds of interaction between the server and the user, and the search token has [Formula: see text] elements, where n is the number of keywords in the query expression, [Formula: see text] is the minimum number of documents that contain one of the keywords in the query expression, and the computation cost of the server is [Formula: see text] modular exponentiation operations.

  1. Web-Beagle: a web server for the alignment of RNA secondary structures.

    Science.gov (United States)

    Mattei, Eugenio; Pietrosanto, Marco; Ferrè, Fabrizio; Helmer-Citterich, Manuela

    2015-07-01

    Web-Beagle (http://beagle.bio.uniroma2.it) is a web server for the pairwise global or local alignment of RNA secondary structures. The server exploits a new encoding for RNA secondary structure and a substitution matrix of RNA structural elements to perform RNA structural alignments. The web server allows the user to compute up to 10 000 alignments in a single run, taking as input sets of RNA sequences and structures or primary sequences alone. In the latter case, the server computes the secondary structure prediction for the RNAs on-the-fly using RNAfold (free energy minimization). The user can also compare a set of input RNAs to one of five pre-compiled RNA datasets including lncRNAs and 3' UTRs. All types of comparison produce in output the pairwise alignments along with structural similarity and statistical significance measures for each resulting alignment. A graphical color-coded representation of the alignments allows the user to easily identify structural similarities between RNAs. Web-Beagle can be used for finding structurally related regions in two or more RNAs, for the identification of homologous regions or for functional annotation. Benchmark tests show that Web-Beagle has lower computational complexity, running time and better performances than other available methods.

  2. Server Side Applications And Plugins Architecture For The Analysis Of Geospatial Information And The Management Of Water Resources

    Science.gov (United States)

    Pierleoni, Arnaldo; Casagrande, Luca; Bellezza, Michele; Casadei, Stefano

    2010-05-01

    The need for increasingly complex geospatial algorithms dedicated to the management of water resources, the fact that many of them require specific knowledge, and the need for dedicated computing machines have led to the necessity of centralizing and sharing the server applications and plugins developed. For this purpose, a Web Processing Service (WPS) has been developed that makes a range of geospatial analysis algorithms, geostatistical methods and remote sensing procedures available to users, and that can be used simply by providing data and input parameters and downloading the results. The core of the system infrastructure is GRASS GIS, which acts as the computational engine, providing more than 350 forms of analysis and the opportunity to create new ad hoc procedures. The WPS was implemented using PyWPS, software written in Python that is easy to manage and configure. All these instruments are managed by a daemon named "Arcibald", created specifically to order the requests that come from users. If processes are already running, the system queues new ones, registering each request and running it only when the previous calculations have been completed. However, each geoprocess has an indicator assessing the resources necessary to run it, making it possible to run geoprocesses that do not require excessive computing time in parallel. This assessment also takes into account the size of the input file provided. The WPS standard defines methods for accessing and running geoprocesses regardless of the client used; nevertheless, a graphical client was developed specifically for accessing the resources. The client was built as a plugin for the QGIS software, which provides the most common tools for viewing and consulting geographically referenced data. The tool was tested using the data taken during the bathymetric campaign at the
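
    A WPS like the one described here is driven by plain HTTP requests; the sketch below sends a KVP-encoded Execute request with the requests library. The endpoint URL, process identifier and input names are hypothetical, since they depend on the GRASS-backed processes actually deployed.

```python
# KVP-encoded WPS 1.0.0 Execute request (illustrative; endpoint and process
# identifier are hypothetical placeholders for a PyWPS/GRASS deployment).
import requests

def wps_execute(endpoint: str, process: str, inputs: dict) -> str:
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": process,
        # WPS 1.0.0 KVP DataInputs: semicolon-separated key=value pairs
        "datainputs": ";".join(f"{k}={v}" for k, v in inputs.items()),
    }
    resp = requests.get(endpoint, params=params, timeout=600)
    resp.raise_for_status()
    return resp.text          # ExecuteResponse XML, to be parsed by the client

if __name__ == "__main__":
    xml = wps_execute(
        "http://example.org/wps",                     # hypothetical endpoint
        "grass:interpolate_bathymetry",               # hypothetical process id
        {"input_points": "lake_survey.gml", "resolution": "10"},
    )
    print(xml[:200])
```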

  3. A Comprehensive Study about Cloud Computing Security: Issues, Applications and Challenges

    Directory of Open Access Journals (Sweden)

    Sima Ghoflgary

    2014-11-01

    Cloud computing provides facilities for users to save their data or information on servers connected through the Internet or an intranet. Further, users can run their applications with the help of software provided by cloud computing servers without installing that software on their own personal computers. Since many users access cloud computing servers for various purposes, one of the main problems in this regard is providing security for the access, usage, sharing and running of users' programs on cloud computing resources or servers. This paper studies security issues, applications and challenges in cloud computing.

  4. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    Science.gov (United States)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in the industry and research community, it also attracts more attention at the consumer level. The large number of users and the high frequency of job requests in the consumer market make it challenging. Clearly, current Client/Server (C/S)-based architectures will become unfeasible for supporting large-scale Grid applications because of their poor scalability and poor fault tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture realizing a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  5. A Secure Mechanism to Supervise Automotive Sensor Network by Client on Smart Phone

    Directory of Open Access Journals (Sweden)

    T R Yashavanth

    2013-03-01

    This paper presents a proposal for the design of a secure smartphone client to monitor an automotive sensor network. Recently, more and more vehicles, such as the BMW X5, can be accessed from outside via smartphone [3]. From a smartphone, users can use the Internet resources in the vehicle and monitor it remotely. When the vehicle is moving or has been stolen, alert information is reported to the user, who can even brake the vehicle in an emergency by sending a control command to the vehicle information gateway via smartphone. Client software on the smartphone is therefore required to monitor the in-vehicle sensor network. To protect the client software from malicious attackers, typically thieves, the client needs a security mechanism chosen with criteria such as computational power, level of security and power consumption in mind. The proposed method uses IDEA to encrypt all messages, since IDEA offers a high level of security, is suitable for software implementation and demands little computational power. A record management store, a Java MIDlet-based mechanism, is suggested for the storage of critical data. Transaction-based communication management between the client software and its gateway is also proposed. The key updating process is verified with model checking in UPPAAL [7].

  6. LabKey Server: An open source platform for scientific data integration, analysis and collaboration

    Directory of Open Access Journals (Sweden)

    Lum Karl

    2011-03-01

    Abstract Background Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely. Results To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i) submitting specimen requests across collaborating organizations; (ii) graphically defining new experimental data types, metadata and wizards for data collection; (iii) transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database; (iv) securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays; (v) interacting dynamically with external data sources; (vi) tracking study participants and cohorts over time; (vii) developing custom interfaces using client libraries; (viii) authoring custom visualizations in a built-in R scripting environment. Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36

  7. Indoor Location Fingerprinting with Heterogeneous Clients

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun

    2011-01-01

    Heterogeneous wireless clients measure signal strength differently. This is a fundamental problem for indoor location fingerprinting, and it has a high impact on the positioning accuracy. Mapping-based solutions have been presented that require manual and error-prone calibration for each new client. This article presents hyperbolic location fingerprinting, which records fingerprints as signal strength ratios between pairs of base stations instead of absolute signal strength values. This article also presents an automatic mapping-based method that avoids calibration by learning from online measurements...
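
    The core idea, ratios between pairs of base stations rather than absolute values, fits in a few lines: with signal strengths in dBm, a power ratio becomes a pairwise difference. The sketch below is a simplified illustration of why this removes per-client calibration, not the article's full positioning pipeline.

```python
# Pairwise signal-strength ratios for hyperbolic fingerprinting (sketch).
# RSS values are in dBm, so a power ratio is just a difference in dB.
from itertools import combinations

def hyperbolic_fingerprint(rss: dict[str, float]) -> dict[tuple[str, str], float]:
    """Map each base-station pair (a, b) to rss[a] - rss[b]."""
    return {(a, b): rss[a] - rss[b] for a, b in combinations(sorted(rss), 2)}

def distance(fp1: dict, fp2: dict) -> float:
    """Euclidean distance between two fingerprints over their shared pairs."""
    shared = fp1.keys() & fp2.keys()
    return sum((fp1[p] - fp2[p]) ** 2 for p in shared) ** 0.5

if __name__ == "__main__":
    # Two clients with different RSS offsets produce identical ratios.
    client_a = {"AP1": -50.0, "AP2": -60.0, "AP3": -70.0}
    client_b = {"AP1": -57.0, "AP2": -67.0, "AP3": -77.0}   # constant -7 dB offset
    print(distance(hyperbolic_fingerprint(client_a), hyperbolic_fingerprint(client_b)))
    # -> 0.0, showing why ratios remove per-client calibration differences
```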

  8. IMPROVING FAULT TOLERANT RESOURCE OPTIMIZED AWARE JOB SCHEDULING FOR GRID COMPUTING

    Directory of Open Access Journals (Sweden)

    K. Nirmala Devi

    2014-01-01

    Workflow brokers in existing Grid scheduling systems lack a cooperation mechanism, which leads to inefficient schedules across distributed resources and worsens the utilization of resources such as network bandwidth and computational cycles. Furthermore, the existing brokering systems described in the literature are primarily built around centralized, hierarchical or client/server models. In such models, vital responsibilities such as resource discovery are delegated to centralized server machines, so they suffer from the well-known disadvantages of a single point of failure, poor scalability and network congestion on links leading to the server. To overcome these issues, we implement a new approach for decentralized cooperative workflow scheduling in a dynamically distributed resource-sharing Grid environment. The various actors in the system, namely users belonging to multiple control domains, workflow brokers and resources, work together to form a single cooperative resource-sharing environment. Previous approaches ignored the fact that each Grid site may have its own fault-tolerance strategy, because each site is an autonomous domain. For instance, if a Grid site uses a job check-pointing mechanism, each computational node must be able to transmit the transient state of the job execution periodically to a server; when a job fails, it migrates to another computational node and resumes from the last stored checkpoint. Glowworm Swarm Optimization (GSO) is used for job scheduling to address the heterogeneity of fault tolerance in computational Grids, and a Weighted GSO, which overcomes the position-update imperfections of the general GSO, is shown to be more efficient in the comparison analysis. This system supports four kinds of fault-tolerance mechanisms, including the job migration, job retry, check-pointing and

  9. A Single-server Discrete-time Retrial G-queue with Server Breakdowns and Repairs

    Institute of Scientific and Technical Information of China (English)

    Jin-ting Wang; Peng Zhang

    2009-01-01

    This paper concerns a discrete-time Geo/Geo/1 retrial queue with both positive and negative customers where the server is subject to breakdowns and repairs due to negative arrivals. The arrival of a negative customer causes one positive customer to be killed if any is present, and simultaneously breaks the server down. The server is sent to repair immediately and after repair it is as good as new. The negative customer also causes the server breakdown if the server is found idle, but has no effect on the system if the server is under repair. We analyze the Markov chain underlying the queueing system and obtain its ergodicity condition. The generating functions of the number of customers in the orbit and in the system are also obtained, along with the marginal distributions of the orbit size when the server is idle, busy or down. Finally, we present some numerical examples to illustrate the influence of the parameters on several performance characteristics of the system.
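
    A rough way to get intuition for such a system is simulation. The sketch below simulates a simplified discrete-time retrial queue with negative arrivals, breakdowns and repairs; the per-slot probabilities and the event ordering within a slot are assumptions, so it only illustrates the dynamics, not the paper's analytical model.

```python
# Rough discrete-time simulation of a Geo/Geo/1 retrial queue with negative
# customers and server breakdowns. Event ordering within a slot and all
# per-slot probabilities are simplifying assumptions, not the paper's model.
import random

P_POS, P_NEG = 0.20, 0.02              # positive / negative arrival probabilities
P_SERVE, P_RETRY, P_REPAIR = 0.30, 0.10, 0.25
SLOTS = 200_000

def simulate(seed: int = 1) -> float:
    rng = random.Random(seed)
    orbit, busy, down = 0, False, False
    total_orbit = 0
    for _ in range(SLOTS):
        if down:                                     # under repair: negatives ignored
            down = rng.random() >= P_REPAIR
        else:
            if rng.random() < P_NEG:                 # negative arrival:
                busy = False                         #   kills the customer in service
                down = True                          #   and breaks the server down
            elif busy and rng.random() < P_SERVE:    # normal service completion
                busy = False
        if rng.random() < P_POS:                     # positive arrival
            if not busy and not down:
                busy = True                          # served immediately
            else:
                orbit += 1                           # blocked: joins the retrial orbit
        if orbit and not busy and not down:
            # each orbiting customer retries independently with prob P_RETRY
            if rng.random() < 1.0 - (1.0 - P_RETRY) ** orbit:
                orbit -= 1
                busy = True
        total_orbit += orbit
    return total_orbit / SLOTS

if __name__ == "__main__":
    print("mean orbit size ~", round(simulate(), 2))
```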

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  11. Windows Server® 2008 Inside Out

    CERN Document Server

    Stanek, William R

    2009-01-01

    Learn how to conquer Windows Server 2008, from the inside out! Designed for system administrators, this definitive resource features hundreds of timesaving solutions, expert insights, troubleshooting tips, and workarounds for administering Windows Server 2008, all in concise, fast-answer format. You will learn how to perform upgrades and migrations, automate deployments, implement security features, manage software updates and patches, administer users and accounts, manage Active Directory® directory services, and more. With INSIDE OUT, you'll discover the best and fastest ways to perform core a

  12. Weather station with a web server

    OpenAIRE

    Repinc, Matej

    2013-01-01

    In this diploma thesis we present the process of building a cheap weather station using the Arduino prototyping platform, and describe its functionality. The weather station monitors the current temperature, air humidity and air pressure. The station has its own simple HTTP server that relays current data in two different formats: JSON-encoded data and a simple HTML page. The weather station can also send data to a pre-defined server used for data collection. We implemented a web site where data an...
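
    A minimal sketch of the collection side mentioned above: a small Flask endpoint that accepts the station's JSON readings and appends them to a log file. The route and field names are assumptions; the thesis does not specify the exact payload.

```python
# Minimal collection server for JSON weather readings (illustrative sketch;
# the route and field names are assumptions, not the thesis' actual format).
import json
import time
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/measurements", methods=["POST"])
def store_measurement():
    reading = request.get_json(force=True)          # e.g. {"temp_c": 21.4, ...}
    reading["received_at"] = time.time()
    with open("weather_log.jsonl", "a") as log:     # one JSON object per line
        log.write(json.dumps(reading) + "\n")
    return jsonify(status="ok"), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```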

  13. Instant Hyper-v Server Virtualization starter

    CERN Document Server

    Eguibar, Vicente Rodriguez

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. The approach is tutorial-style, guiding users in an orderly manner toward virtualization. This book is conceived for system administrators and advanced PC enthusiasts who want to venture into the virtualization world. Although this book starts from scratch, knowledge of server operating systems, LANs and networking has to be in place. A good background in server administration is desirable, including networking service

  14. The Giga View Multiprocessor Multidisk Image Server

    Directory of Open Access Journals (Sweden)

    B. A. Gennart

    1996-01-01

    Professionals in various fields such as medical imaging, biology, and civil engineering require rapid access to huge amounts of pixmap image data. Multimedia interfaces further increase the need for large image databases. To fulfill these requirements, the GigaView parallel image server architecture relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one disk. This contribution reviews the design of the GigaView hardware and file system, compares it to other storage servers available on the market, and evaluates fields of application for the architecture.

  15. Professional Microsoft SQL Server 2012 Integration Services

    CERN Document Server

    Knight, Brian; Moss, Jessica M; Davis, Mike; Rock, Chris

    2012-01-01

    An in-depth look at the radical changes to the newest release of SSIS. Microsoft SQL Server 2012 Integration Services (SSIS) builds on the revolutionary database product suite first introduced in 2005. With this crucial resource, you will explore how this newest release serves as a powerful tool for performing extraction, transformation, and load (ETL) operations. A team of SQL Server experts deciphers this complex topic and provides detailed coverage of the new features of the 2012 product release. In addition to technical updates and additions, the authors present you with a new set of SSIS b

  16. WebLogic Server 9.0

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    On 9 August, BEA WebLogic Server 9.0 became generally available. Wai Wong, Executive Vice President of Products at BEA, said: "WebLogic Server 9.0 will enable users to move toward SOA while continuing to achieve the core goals of improving efficiency, reducing IT costs and achieving zero downtime in complex, heterogeneous environments."

  17. Instant Team Foundation Server 2012 and Project Server 2010 integration how-to

    CERN Document Server

    Gauvin, Gary P

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Get the job done and learn as you go. A how-to book with practical recipes accompanied by rich screenshots for easy comprehension. The how-to style is very practical and takes the reader through the process of gaining a basic understanding of TFS and Project Server with practical tutorials and recipes. This book is for users who want to integrate TFS 2012 and Project Server 2010. Readers are expected to know some basic Windows Server commands and account management, a

  18. Transparent Computing System Based on Hierarchical Cache%基于分级Cache的透明计算系统

    Institute of Scientific and Technical Information of China (English)

    谭成辉; 杨磊; 文建国; 李肯立

    2011-01-01

    A novel transparent computing system called HCTS is proposed in this paper. HCTS adopts a hierarchical cache strategy in the client and the server respectively to improve the I/O performance of the system. In order to improve the cache hit ratio in the given transparent computing environment, this paper presents a modified LRU replacement algorithm called LRU-AFS, based on a threshold on the data access count, which is used to distinguish frequently used data from rarely used data. Test results show that, as the number of clients in the LAN grows, HCTS improves client I/O performance compared with a general transparent computing system (TS), dramatically reduces network traffic, significantly shortens boot time and effectively increases the random read-write throughput of clients.
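
    The LRU-AFS idea, plain LRU protected by an access-count threshold so that frequently used blocks are not evicted by streams of rarely used ones, can be sketched as follows. The threshold value and the exact eviction rule are guesses at the spirit of the algorithm, not the paper's specification.

```python
# Sketch of an LRU variant with an access-frequency threshold (LRU-AFS-like).
# Blocks whose access count reaches THRESHOLD are evicted only after all
# "cold" blocks are gone; the details are assumptions, not the paper's spec.
from collections import OrderedDict

class FrequencyAwareLRU:
    def __init__(self, capacity: int, threshold: int = 3):
        self.capacity, self.threshold = capacity, threshold
        self.entries = OrderedDict()   # key -> value, kept in LRU order
        self.counts = {}               # key -> access count

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)              # most recently used
        self.counts[key] += 1
        return self.entries[key]

    def put(self, key, value) -> None:
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self._evict()
        self.entries[key] = value
        self.counts[key] = self.counts.get(key, 0) + 1

    def _evict(self) -> None:
        # Prefer the least recently used block that is still "cold";
        # fall back to plain LRU if every block is frequently accessed.
        for key in self.entries:                   # iterates in LRU order
            if self.counts[key] < self.threshold:
                break
        else:
            key = next(iter(self.entries))
        del self.entries[key]
        del self.counts[key]

if __name__ == "__main__":
    cache = FrequencyAwareLRU(capacity=2)
    cache.put("boot.img", b"...")
    for _ in range(3):
        cache.get("boot.img")                      # becomes a hot block
    cache.put("tmp1", b"...")
    cache.put("tmp2", b"...")                      # evicts tmp1 (cold), keeps boot.img
    print(sorted(cache.entries))                   # ['boot.img', 'tmp2']
```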

  19. Design and Test of Application-Specific Integrated Circuits by use of Mobile Clients

    Directory of Open Access Journals (Sweden)

    Michael Auer

    2009-02-01

    The aim of this work is to develop a simultaneous multi-user access system, READ (Remote ASIC Design and Test), that allows users to perform tests and measurements remotely via clients running on mobile devices as well as on standard PCs. The system also facilitates the remote design of circuits with the PAC-Designer. The system is controlled by LabVIEW and was implemented using a data acquisition card from National Instruments. Such systems are especially suited for manufacturing process monitoring and control. The performance of simultaneous access was tested under load with a variable number of users. The server implements a queue that processes users' commands upon request.
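
    The server-side queue mentioned above, which serializes commands from simultaneous users, can be sketched with a worker thread draining a FIFO queue; the command format and handler are illustrative assumptions, not the LabVIEW implementation.

```python
# Serializing commands from multiple users through one FIFO queue (sketch).
# The actual READ system does this in LabVIEW; this only illustrates the idea.
import queue
import threading
import time

commands = queue.Queue()            # items are (user, command) tuples

def instrument_worker() -> None:
    """Single worker applies commands one at a time, in arrival order."""
    while True:
        user, command = commands.get()
        print(f"executing {command!r} for {user}")
        time.sleep(0.1)                 # stand-in for driving the test hardware
        commands.task_done()

threading.Thread(target=instrument_worker, daemon=True).start()

if __name__ == "__main__":
    for user, cmd in [("alice", "measure_vdd"), ("bob", "run_scan_chain"),
                      ("alice", "read_register 0x1F")]:
        commands.put((user, cmd))       # clients enqueue concurrently in practice
    commands.join()                     # wait until every queued command has run
```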

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...