WorldWideScience

Sample records for 4-tiered client-server distributed

  1. A Multidatabase System as 4-Tiered Client-Server Distributed Heterogeneous Database System

    Mohammad Ghulam Ali

    2009-01-01

    In this paper, we describe a multidatabase system as a 4-tiered Client-Server DBMS architecture. We discuss its functional components and provide an overview of its performance characteristics. The first component of this proposed system is a web-based interface or Graphical User Interface, which resides on top of the Client Application Program; the second component of the system is a client application program running in an application server, which resides on top of the Global Database M...

  2. Client/server models for transparent, distributed computational resources

    Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. The recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models; previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator will be presented and will include a discussion of generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, will be presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used in production/computation situations. The NPA incorporates a client/server interface for transferring/translating TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing it to become a shared co-processor to the workstation application. 5 refs., 6 figs
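
    As a hedged illustration of the request/response paradigm that RPC-style client/server models build on, the following minimal Java sketch shows a server exposing a single "remote procedure" over a plain TCP socket and a client invoking it as if it were local. The port number and the trivial square service are assumptions for illustration; this is not the NPA or fluids-simulation interface described above.

      import java.io.*;
      import java.net.*;

      // Minimal sketch of the client/server request/response paradigm underlying RPC.
      // The "remote procedure" simply squares an integer; the port and protocol are
      // illustrative assumptions, not the interfaces of the systems described above.
      public class RpcSketch {
          static final int PORT = 9090; // hypothetical port

          // Server side: accept one request per connection, reply with the result.
          static void serve() throws IOException {
              try (ServerSocket listener = new ServerSocket(PORT)) {
                  while (true) {
                      try (Socket s = listener.accept();
                           BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                           PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                          int arg = Integer.parseInt(in.readLine().trim());
                          out.println(arg * arg); // marshal the result back to the caller
                      }
                  }
              }
          }

          // Client side: looks like a local call, but the work happens on the server.
          static int squareRemote(String host, int arg) throws IOException {
              try (Socket s = new Socket(host, PORT);
                   PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                   BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
                  out.println(arg);                       // marshal the argument
                  return Integer.parseInt(in.readLine()); // unmarshal the result
              }
          }

          public static void main(String[] args) throws IOException {
              if (args.length > 0 && args[0].equals("server")) serve();
              else System.out.println(squareRemote("localhost", 7)); // prints 49
          }
      }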

  3. Model of the reliability analysis of the distributed computer systems with architecture "client-server"

    Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu

    2015-01-01

    The paper considers the problem of analyzing the reliability of distributed computer systems with a client-server architecture. A distributed computer system is a set of hardware and software for implementing the following main functions: processing, storage, transmission and protection of data. This paper discusses the client-server architecture of distributed computer systems. The paper presents a scheme of distributed computer system functioning represented as a graph whose vertices are the functional states of the system and whose arcs are transitions from one state to another depending on the prevailing conditions. In the reliability analysis we consider reliability indicators such as the probability of the system transitioning into the stopping and accident states, as well as the intensities of these transitions. The proposed model allows us to obtain relations for the reliability parameters of the distributed computer system without any assumptions about the distribution laws of the random variables or the number of elements in the system.
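
    A minimal sketch of how such a state-transition graph can be explored numerically is given below: it estimates the probability of ending in a stopping state versus an accident state by Monte Carlo simulation. Unlike the paper's distribution-free model, this sketch assumes exponentially distributed transition times, and the states and intensities are illustrative assumptions, not values from the paper.

      import java.util.Random;

      // Hedged sketch: estimates the probability of reaching a "stop" versus an
      // "accident" state in a small state-transition graph by Monte Carlo simulation.
      // Exponential transition times are assumed here, unlike the paper's model.
      public class ReliabilityGraphSketch {
          // States: 0 = working, 1 = degraded, 2 = stopped (absorbing), 3 = accident (absorbing)
          // RATE[i][j] = transition intensity from state i to state j (per hour, hypothetical)
          static final double[][] RATE = {
              {0.0, 0.05, 0.01, 0.001},
              {0.2, 0.0,  0.05, 0.01 },
              {0.0, 0.0,  0.0,  0.0  },
              {0.0, 0.0,  0.0,  0.0  }
          };

          public static void main(String[] args) {
              Random rng = new Random(42);
              int runs = 100_000, stopped = 0, accidents = 0;
              for (int r = 0; r < runs; r++) {
                  int state = 0;
                  while (state != 2 && state != 3) {
                      // competing exponential transitions: the one that fires first wins
                      double best = Double.MAX_VALUE;
                      int next = state;
                      for (int j = 0; j < 4; j++) {
                          if (RATE[state][j] <= 0) continue;
                          double t = -Math.log(1 - rng.nextDouble()) / RATE[state][j];
                          if (t < best) { best = t; next = j; }
                      }
                      state = next;
                  }
                  if (state == 2) stopped++; else accidents++;
              }
              System.out.printf("P(stop) ~ %.3f, P(accident) ~ %.3f%n",
                      (double) stopped / runs, (double) accidents / runs);
          }
      }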

  4. Client-server, distributed database strategies in a healthcare record system for a homeless population.

    Chueh, H C; Barnett, G O

    1993-01-01

    A computer-based healthcare record system being developed for Boston's Healthcare for the Homeless Program (BHCHP) uses client-server and distributed database technologies to enhance the delivery of healthcare to patients of this unusual population. The needs of physicians, nurses and social workers are specifically addressed in the application interface so that an integrated approach to healthcare for this population can be facilitated. These patients and their providers have unique medical information needs that are supported by both database and applications technology. To integrate the information capabilities with the actual practice of providers of care to the homeless, this computer-based record system is designed for remote and portable use over regular phone lines. An initial standalone system is being used at one major BHCHP site of care. This project describes methods for creating a secure, accessible, and scalable computer-based medical record using client-server, distributed database design. PMID:8130445

  5. Group-oriented coordination models for distributed client-server computing

    Adler, Richard M.; Hughes, Craig S.

    1994-01-01

    This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.
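
    As a hedged sketch of the scatter/gather coordination described above (decompose a client request, dispatch sub-requests to independent servers in parallel, combine the partial results into one response), the following Java fragment uses a thread pool in place of real middleware; queryServer() and the server names are placeholders, not the request-broker or process-group technology used in the paper.

      import java.util.ArrayList;
      import java.util.List;
      import java.util.concurrent.Callable;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Future;

      // Hedged sketch of the scatter/gather pattern: decompose, dispatch, combine.
      public class ScatterGatherSketch {
          static String queryServer(String server, String request) {
              return server + " answered [" + request + "]";   // placeholder back-end call
          }

          static String handleClientRequest(String request, List<String> servers) throws Exception {
              ExecutorService pool = Executors.newFixedThreadPool(servers.size());
              try {
                  List<Future<String>> partials = new ArrayList<>();
                  for (String server : servers) {                       // scatter
                      Callable<String> subtask = () -> queryServer(server, request);
                      partials.add(pool.submit(subtask));
                  }
                  StringBuilder combined = new StringBuilder();
                  for (Future<String> partial : partials) {             // gather
                      combined.append(partial.get()).append('\n');
                  }
                  return combined.toString();                           // single combined response
              } finally {
                  pool.shutdown();
              }
          }

          public static void main(String[] args) throws Exception {
              System.out.print(handleClientRequest("SELECT * FROM parts",
                      List.of("db-east", "db-west", "db-archive")));
          }
      }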

  6. Distributed analysis with CRAB: The client-server architecture evolution and commissioning

    Codispoti, G.; /INFN, Bologna /Bologna U.; Cinquilli, M.; /INFN, Perugia; Fanfani, A.; /Bologna U.; Fanzago, F.; /CERN /INFN, CNAF; Farina, F.; /CERN /INFN, Milan Bicocca; Lacaprara, S.; /INFN, Legnaro; Miccio, V.; /CERN /INFN, CNAF; Spiga, D.; /CERN /INFN, Perugia /Perugia U.; Vaandering, E.; /Fermilab

    2008-01-01

    CRAB (CMS Remote Analysis Builder) is the tool used by CMS to enable running physics analysis in a transparent manner over data distributed across many sites. It abstracts out the interaction with the underlying batch farms, grid infrastructure and CMS workload management tools, such that it is easily usable by non-experts. CRAB can be used as a direct interface to the computing system or can delegate the user task to a server. Major efforts have been dedicated to the client-server system development, allowing the user to deal only with a simple and intuitive interface and to delegate all the work to a server. The server takes care of handling the user's jobs during the whole lifetime of the user's task. In particular, it takes care of the data and resource discovery, process tracking and output handling. It also provides services such as automatic resubmission in case of failures, notification to the user of the task status, and automatic blacklisting of sites showing evident problems beyond what is provided by existing grid infrastructure. The CRAB Server architecture and its deployment will be presented, as well as the current status and future development. In addition, the experience in using the system for initial detector commissioning activities and data analysis will be summarized.

  7. Client/server study

    Dezhgosha, Kamyar; Marcus, Robert; Brewster, Stephen

    1995-01-01

    The goal of this project is to find cost-effective and efficient strategies/solutions to integrate existing databases, manage networks, and improve productivity of users in a move towards client/server and Integrated Desktop Environment (IDE) at NASA LeRC. The project consisted of two tasks as follows: (1) Data collection, and (2) Database Development/Integration. Under task 1, survey questionnaires and a database were developed. Also, an investigation on commercially available tools for automated data-collection and net-management was performed. As requirements evolved, the main focus has been task 2, which involved the following subtasks: (1) Data gathering/analysis of database user requirements, (2) Database analysis and design, making recommendations for modification of existing data structures into relational database or proposing a common interface to access heterogeneous databases (INFOMAN system, CCNS equipment list, CCNS software list, USERMAN, and other databases), (3) Establishment of a client/server test bed at Central State University (CSU), (4) Investigation of multi-database integration technologies/products for IDE at NASA LeRC, and (5) Development of prototypes using CASE tools (Object/View) for representative scenarios accessing multi-databases and tables in a client/server environment. Both CSU and NASA LeRC have benefited from this project. The CSU team investigated and prototyped cost-effective/practical solutions to facilitate NASA LeRC's move to a more productive environment. CSU students utilized new products and gained skills that could be a great resource for future needs of NASA.

  8. Framework for Deploying Client/Server Distributed Database System for effective Human Resource Information Management Systems in Imo State Civil Service of Nigeria

    Josiah Ahaiwe; Nwaokonkwo Obi

    2012-01-01

    The information system is an integrated system that holds financial and personnel records of persons working in various branches of Imo state civil service. The purpose is to harmonize operations, reduce or if possible eliminate redundancy and control the introduction of “ghost workers” and fraud in pension management. In this research work, an attempt is made to design a framework for deploying a client/server distributed database system for a human resource information management system wi...

  9. Open client/server computing and middleware

    Simon, Alan R

    2014-01-01

    Open Client/Server Computing and Middleware provides a tutorial-oriented overview of open client/server development environments and how client/server computing is being done. This book analyzes an in-depth set of case studies about two different open client/server development environments, Microsoft Windows and UNIX, describing the architectures, various product components, and how these environments interrelate. Topics include the open systems and client/server computing, next-generation client/server architectures, principles of middleware, and overview of ProtoGen+. The ViewPaint environment

  10. Framework for Deploying Client/Server Distributed Database System for effective Human Resource Information Management Systems in Imo State Civil Service of Nigeria

    Josiah Ahaiwe

    2012-08-01

    Full Text Available The information system is an integrated system that holds financial and personnel records of persons working in various branches of Imo state civil service. The purpose is to harmonize operations, reduce or if possible eliminate redundancy and control the introduction of “ghost workers” and fraud in pension management. In this research work, an attempt is made to design a framework for deploying a client/server distributed database system for a human resource information management system with a scope on Imo state civil service in Nigeria. The system consists of a relational database of personnel variables which could be shared by various levels of management in all the ministries and their branches located all over the state. The server is expected to be hosted in the accountant general’s office. The system is capable of handling recruitment and promotions issues, training, monthly remunerations, pension and gratuity issues, and employment history, etc.

  11. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-Server architecture allows for distributive processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical ...

  12. A client/server approach to telemedicine.

    Vaughan, B. J.; Torok, K. E.; Kelly, L. M.; Ewing, D J; Andrews, L. T.

    1995-01-01

    This paper describes the Medical College of Ohio's efforts in developing a client/server telemedicine system. Telemedicine vastly improves the ability of a medical center physician or specialist to interactively consult with a physician at a remote health care facility. The patient receives attention more quickly, he and his family do not need to travel long distances to obtain specialists' services, and the primary care physician can be involved in diagnosis and developing a treatment progra...

  13. A client/server approach to telemedicine.

    Vaughan, B J; Torok, K E; Kelly, L M; Ewing, D J; Andrews, L T

    1995-01-01

    This paper describes the Medical College of Ohio's efforts in developing a client/server telemedicine system. Telemedicine vastly improves the ability of a medical center physician or specialist to interactively consult with a physician at a remote health care facility. The patient receives attention more quickly, he and his family do not need to travel long distances to obtain specialists' services, and the primary care physician can be involved in diagnosis and developing a treatment program [1, 2]. Telemedicine consultations are designed to improve access to health services in underserved urban and rural communities and reduce isolation of rural practitioners [3]. PMID:8563396

  14. Client Server design and implementation issues in the Accelerator Control System environment

    In distributed system communication software design, the Client Server model has been widely used. This paper addresses the design and implementation issues of such a model, particularly when used in Accelerator Control Systems. In designing the Client Server model, one needs to decide how the services will be defined for a server, what types of messages the server will respond to, which data formats will be used for the network transactions, and how the server will be located by the client. Special consideration needs to be given to error handling on both the server and client side. Since the server is usually located on a machine other than the client, easy and informative server diagnostic capability is required. The higher level abstraction provided by the Client Server model simplifies the application writing; however, fine control over network parameters is essential to improve the performance. The above-mentioned design issues and implementation trade-offs are discussed in this paper
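
    The sketch below illustrates, in hedged form, the kind of decisions the abstract lists: an explicit set of message types, a simple text wire format, and an error reply for malformed or unsupported requests. The message types and the "TYPE|payload" framing are illustrative assumptions, not the accelerator control system's actual protocol.

      // Hedged sketch of a tiny line-oriented message format with an explicit
      // message type, a payload, and an error reply for bad requests.
      public class MessageSketch {
          enum Type { READ_DEVICE, SET_DEVICE, REPLY, ERROR }

          static String encode(Type type, String payload) {
              return type.name() + "|" + payload;            // wire format: TYPE|payload
          }

          // Server-side dispatch: unknown or malformed requests produce an ERROR reply,
          // so the client always gets a diagnosable answer instead of a silent failure.
          static String handle(String request) {
              int sep = request.indexOf('|');
              if (sep < 0) return encode(Type.ERROR, "malformed message");
              String typeName = request.substring(0, sep);
              String payload  = request.substring(sep + 1);
              try {
                  switch (Type.valueOf(typeName)) {
                      case READ_DEVICE: return encode(Type.REPLY, "value=42 for " + payload);
                      case SET_DEVICE:  return encode(Type.REPLY, "ok");
                      default:          return encode(Type.ERROR, "unsupported type " + typeName);
                  }
              } catch (IllegalArgumentException e) {
                  return encode(Type.ERROR, "unknown type " + typeName);
              }
          }

          public static void main(String[] args) {
              System.out.println(handle(encode(Type.READ_DEVICE, "magnet-PS-01")));
              System.out.println(handle("garbage"));
          }
      }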

  15. Client-Server Password Recovery (Extended Abstract)

    Chmielewski, Łukasz; van Rossum, Peter

    2009-01-01

    Human memory is not perfect - people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the password. These protocols can be easily adapted to the personal entropy setting, where a user can recover a password only if he can answer a large enough subset of personal questions. We introduce client-server password recovery methods, in which the recovery data are stored at the server, and the recovery procedures are integrated into the login procedures. These methods apply to two of the most common types of password based authentication systems. The security of these solutions is significantly better than the security of presently proposed password recovery schemes. Our protocols are based on a variation of threshold encryption that may be of independent interest.

  16. The new client/server model in large container inspection system

    The author presents a new client/server model in the distributed networking environment of the large container inspection system. The authors illustrate the structure and characteristics of the model, and introduce the transmittal dispatching technology of server communication, which is based on simulating a three-passage structure

  17. The convertible client/server technology in large container inspection system

    The author presents a new convertible client/server technology in the distributed networking environment of a large container inspection system. The characteristics and advantages of this technology are introduced. The authors illustrate the policy of the technology to develop the networking program, and provide one example of how to program the software in a large container inspection system using the new technology

  18. The convertible client/server technology in large container inspection system

    The author presents a new convertible client/server technology in the distributed networking environment of the large container inspection system. The characteristics and advantages of the technology are introduced. The authors illustrate the policy of the technology to develop the networking program, and provide one example of how to program the software in a large container inspection system using the new technology

  19. Creating and optimizing client-server applications on mobile devices

    Anacleto, Ricardo; Luz, Nuno; Almeida, Ana; Figueiredo, Lino; Novais, Paulo

    2013-01-01

    Mobile devices are embedded systems with very limited capacities that need to be considered when developing a client-server application, mainly due to technical, ergonomic and economic implications for the mobile user. With the increasing popularity of mobile computing, many developers have faced problems due to the low performance of devices. In this paper, we discuss how to optimize and create client-server applications for wireless/mobile environments, presenting techniques...

  20. Client/server approach to image capturing

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum scanners with photo multiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven

  1. Client-Server Connection Status Monitoring Using Ajax Push Technology

    Lamongie, Julien R.

    2008-01-01

    This paper describes how simple client-server connection status monitoring can be implemented using Ajax (Asynchronous JavaScript and XML), JSF (Java Server Faces) and ICEfaces technologies. This functionality is required for NASA LCS (Launch Control System) displays used in the firing room for the Constellation project. Two separate implementations based on two distinct approaches are detailed and analyzed.

  2. Improved materials management through client/server computing

    This paper reports that materials management and procurement impact every organization within an electric utility, from power generation to customer service. An efficient materials management and procurement system can help improve productivity and minimize operating costs. It is no longer sufficient to simply automate materials management using inventory control systems. Smart companies are building centralized data warehouses and use the client/server style of computing to provide real time data access. This paper describes how Alabama Power Company, Southern Company Services and Digital Equipment Corporation transformed two existing applications, a purchase order application within DEC's ALL-IN-1 environment and a materials management application within an IBM CICS environment, into a data warehouse - client/server application. An application server is used to overcome incompatibilities between computing environments and provide easy, real-time access to information residing in multi-vendor environments

  3. A Client-Server System for Ubiquitous Video Service

    Ronit Nossenson

    2012-12-01

    Full Text Available In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are: efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous service requirements such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users.

  4. An Object-Oriented Framework for Client-Server Applications

    When developing high-level accelerator applications it is often necessary to perform extensive calculations to generate a data set that will be used as an input for other applications. Depending on the size and complexity of these computations, regenerating the interim data sets can introduce errors or otherwise negatively impact system performance. If these computational data sets could be generated in advance and be updated continuously from changes in the accelerator, it could substantially reduce the time and effort required in performing subsequent calculations. UNIX server applications are well suited to accommodate this need by providing a centralized repository for data or computational power. Because of the inherent difficulty in writing a robust server application, the development of the network communications software is often more burdensome than the computational engine. To simplify the task of building a client/server application, we have developed an object-oriented server shell which hides the complexity of the network software development from the programmer. This document will discuss how to implement a complete client/server application using this C++ class library with a minimal understanding of network communications mechanisms
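
    A hedged Java sketch of the "server shell" idea follows: the networking boilerplate lives in an abstract base class, and the application programmer only supplies the computational method. This is a plain-socket illustration of the pattern, not the C++ class library described above; the port number and the toy summing service are assumptions.

      import java.io.*;
      import java.net.*;

      // Hedged sketch: networking lives in the base class, applications only override compute().
      public abstract class ServerShell {
          private final int port;

          protected ServerShell(int port) { this.port = port; }

          // The only method an application programmer has to write.
          protected abstract String compute(String request);

          // All socket handling is hidden here.
          public void run() throws IOException {
              try (ServerSocket listener = new ServerSocket(port)) {
                  while (true) {
                      try (Socket s = listener.accept();
                           BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                           PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                          out.println(compute(in.readLine()));
                      }
                  }
              }
          }

          // Example "computational engine": sums a comma-separated list of numbers.
          public static void main(String[] args) throws IOException {
              new ServerShell(9191) {   // hypothetical port
                  @Override protected String compute(String request) {
                      double sum = 0;
                      for (String tok : request.split(",")) sum += Double.parseDouble(tok.trim());
                      return "sum=" + sum;
                  }
              }.run();
          }
      }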

  5. Client-server computer architecture saves costs and eliminates bottlenecks

    This paper reports that a workstation-based, client-server architecture saved costs and eliminated bottlenecks that BP Exploration (Alaska) Inc. experienced with mainframe computer systems. In 1991, BP embarked on an ambitious project to change technical computing for its Prudhoe Bay, Endicott, and Kuparuk operations on Alaska's North Slope. This project promised substantial rewards, but also involved considerable risk. The project plan called for reservoir simulations (which historically had run on a Cray Research Inc. X-MP supercomputer in the company's Houston data center) to be run on small computer workstations. Additionally, large Prudhoe Bay, Endicott, and Kuparuk production and reservoir engineering databases and related applications also would be moved to workstations, replacing a Digital Equipment Corp. VAX cluster in Anchorage

  6. A RAD approach to client/server system development

    The capability, richness, and leverage of inexpensive commercial operating systems, off-the-shelf applications, and powerful developing tools have made building feature-rich client/server systems possible in rapid time and at low cost--ushering in a new level of systems integration not before possible. The authors achieve rapid application development (RAD) by using a flexible and extendible client/service integration framework. The framework provides the means to integrate in-house and third-party software applications with databases and expert-system knowledge bases and, where appropriate, provides communication links among the applications. The authors discuss the integration framework's capabilities, explain its underlying system architecture, and outline the methods and tools used to customize and integrate many diverse applications

  7. FRIEND Engine Framework: A real time neurofeedback client-server system for neuroimaging studies

    Rodrigo eBasilio

    2015-01-01

    Full Text Available In this methods article, we present a new implementation of a recently reported FSL-integrated neurofeedback tool, the standalone version of Functional Real-time Interactive Endogenous Modulation and Decoding (FRIEND). We will refer to this new implementation as the FRIEND Engine Framework. The framework comprises a client-server cross-platform solution for real time fMRI and fMRI/EEG neurofeedback studies, enabling flexible customization or integration of graphical interfaces, devices and data processing. This implementation allows a fast setup of novel plug-ins and frontends, which can be shared with the user community at large. The FRIEND Engine Framework is freely distributed for non-commercial, research purposes.

  8. Proving the correctness of client/server software

    Eyad Alkassar; Sebastian Bogan; Wolfgang J Paul

    2009-02-01

    Remote procedure calls (RPCs) lie at the heart of any client/server software. Thus, formal specification and verification of RPC mechanisms is a prerequisite for the verification of any such software. In this paper, we present a mathematical specification of an RPC mechanism and we outline how to prove the correctness of an implementation — say written in C — of this mechanism at the code level. We define a formal model of user processes running concurrently under a simple operating system, which provides inter-process communication and portmapper system calls. A simple theory of non-interference permits us to use conventional sequential program analysis between system calls (within the concurrent model). An RPC mechanism is specified and the correctness proof for server implementations, using this mechanism, is outlined. To the best of our knowledge this is the first treatment of the correctness of an entire RPC mechanism at the code level.

  9. A client/server tape robot system implemented using CORBA (Common Object Request Broker Architecture) and C++

    The Common Object Request Broker Architecture (CORBA) is an object-oriented communications framework which allows for the easy design and development of distributed, object-oriented applications. A CORBA-based implementation of a distributed client/server tape robot system (KIWI Tape Robot) is developed. This approach allows for a variety of data-modeling options in a distributed tape server environment. The use of C++ in the handling of HEP data which is stored in a Hierarchical Mass Storage System is demonstrated. (author)

  10. Solid Waste Information and Tracking System Client Server Conversion Project Management Plan

    GLASSCOCK, J.A.

    2000-02-10

    The Project Management Plan governing the conversion of SWITS to a client-server architecture. The PMP describes the background, planning and management of the SWITS conversion. Requirements and specification documentation needed for the SWITS conversion

  11. Object-oriented designs for LHD data acquisitions using client-server model

    The LHD data acquisition system handles >600 MB data per shot. The fully distributed data processing and the object-oriented system design are the main principles of this system. Its wide flexibility has been realized by introducing the object-oriented method into the data processing, in which the object sharing and class libraries will provide the unified way of data handling for the network client-server programming. The object class libraries are described in C++, and the network object sharing is provided through the commercial software named HARNESS. As for the CAMAC setup, the Java script can use the C++ class libraries and thus establishes the relationship between the object-oriented database and the WWW server. In LHD experiments, the CAMAC system and the Windows NT operating system are applied for digitizing and acquiring data, respectively. For the purpose of the LHD data acquisition, the new CAMAC handling software on Windows NT have been developed to manipulate the SCSI-connected crate controllers. The CAMAC command lists and diagnostic data classes are shared between client and server computers. A lump of the diagnostic data can be treated as part of an object by the object-oriented programming. (orig.)

  12. Features client-server bus seats reservation technology in the long-distance connection

    Radchenko, K. O.; National Technical University of Ukraine “KPI”; Ruzhevskyi, M. S.; National Technical University of Ukraine “KPI”; Shrol, A. Yu.; National Technical University of Ukraine “KPI”

    2016-01-01

    This paper describes the features of the client-server technology for booking seats by a bus driver, using software developed for mobile devices and tablets based on the Android operating system. The application allows the driver on a long-distance connection to send data about the occupied seats in the bus to the MTE server during the journey. The application has a user-friendly interface. For the client-server communication capabilities, Android Studio and the Android SDK are used

  13. The Key Implementation Technology of Client/Server's Asynchronous Communication Programs

    2002-01-01

    This paper introduces the implementation method, key technology and flowchart of client/server asynchronous communication programs on Linux or Unix, and further explains a few problems that deserve attention for improving CPU efficiency when implementing asynchronous communication programs.
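
    As a hedged illustration of the asynchronous style such programs use to keep CPU usage low, the Java sketch below multiplexes many client connections on one thread with a non-blocking selector and simply echoes data back. It is a generic NIO example, not the code from the paper, and the port number is an assumption.

      import java.io.IOException;
      import java.net.InetSocketAddress;
      import java.nio.ByteBuffer;
      import java.nio.channels.*;
      import java.util.Iterator;

      // Hedged sketch: one thread serves many connections via a selector instead of
      // blocking on each socket; incoming bytes are echoed back to the sender.
      public class AsyncEchoServer {
          public static void main(String[] args) throws IOException {
              Selector selector = Selector.open();
              ServerSocketChannel server = ServerSocketChannel.open();
              server.bind(new InetSocketAddress(9292));        // hypothetical port
              server.configureBlocking(false);
              server.register(selector, SelectionKey.OP_ACCEPT);

              while (true) {
                  selector.select();                            // wait for any ready channel
                  Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                  while (keys.hasNext()) {
                      SelectionKey key = keys.next();
                      keys.remove();
                      if (key.isAcceptable()) {                 // new client connection
                          SocketChannel client = server.accept();
                          client.configureBlocking(false);
                          client.register(selector, SelectionKey.OP_READ);
                      } else if (key.isReadable()) {            // data ready: read and echo
                          SocketChannel client = (SocketChannel) key.channel();
                          ByteBuffer buf = ByteBuffer.allocate(1024);
                          int n = client.read(buf);
                          if (n < 0) { client.close(); continue; }
                          buf.flip();
                          client.write(buf);
                      }
                  }
              }
          }
      }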

  14. Analysis of Java Client/Server and Web Programming Tools for Development of Educational Systems.

    Muldner, Tomasz

    This paper provides an analysis of old and new programming tools for development of client/server programs, particularly World Wide Web-based programs. The focus is on development of educational systems that use interactive shared workspaces to provide portable and expandable solutions. The paper begins with a short description of relevant terms.…

  15. Usage of Thin-Client/Server Architecture in Computer Aided Education

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  16. A Smartphone Client-Server Teleradiology System for Primary Diagnosis of Acute Stroke

    Mitchell, J. Ross; Sharma, Pranshu; Modi, Jayesh; Simpson, Mark; Thomas, Monroe; Hill, Michael D.; Goyal, Mayank

    2011-01-01

    Background Recent advances in the treatment of acute ischemic stroke have made rapid acquisition, visualization, and interpretation of images a key factor for positive patient outcomes. We have developed a new teleradiology system based on a client-server architecture that enables rapid access to interactive advanced 2-D and 3-D visualization on a current generation smartphone device (Apple iPhone or iPod Touch, or an Android phone) without requiring patient image data to be stored on the dev...

  17. How to Use a Desktop Version of a DBMS for Client-Server Applications

    Julian VASILEV

    2008-01-01

    DBMS (Data base management systems) still have a very high price for small and middle enterprises in Bulgaria. Desktop versions are free but they cannot function in multi-user environment. We will try to make an application server which will make a Desktop version of a DBMS open to many users. Thus, this approach will be appropriate for client-server applications. The author of the article gives a concise observation of the problem and a possible way of solution.
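
    A minimal sketch of the idea described above, under stated assumptions (a hypothetical JDBC URL and table name): a small gateway process owns the single connection to the desktop DBMS and serializes remote clients' queries through it, which is what makes a single-user engine usable in a client-server setting. This illustrates the approach, not the author's implementation.

      import java.sql.*;

      // Hedged sketch: an application server that owns one connection to a desktop
      // DBMS and runs queries one at a time on behalf of many remote clients.
      public class DesktopDbGateway {
          private final Connection connection;

          public DesktopDbGateway(String jdbcUrl) throws SQLException {
              this.connection = DriverManager.getConnection(jdbcUrl);
          }

          // One query at a time: synchronization stands in for the multi-user
          // support the desktop DBMS itself lacks.
          public synchronized String runQuery(String sql) throws SQLException {
              StringBuilder result = new StringBuilder();
              try (Statement st = connection.createStatement();
                   ResultSet rs = st.executeQuery(sql)) {
                  int cols = rs.getMetaData().getColumnCount();
                  while (rs.next()) {
                      for (int c = 1; c <= cols; c++) result.append(rs.getString(c)).append('\t');
                      result.append('\n');
                  }
              }
              return result.toString();
          }

          public static void main(String[] args) throws Exception {
              // hypothetical file-based database; any JDBC-accessible desktop DBMS would do
              DesktopDbGateway gateway = new DesktopDbGateway("jdbc:hsqldb:file:/tmp/clientsdb");
              System.out.print(gateway.runQuery("SELECT * FROM students")); // hypothetical table
          }
      }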

  18. Location Privacy Techniques in Client-Server Architectures

    Jensen, Christian Søndergaard; Lu, Hua; Yiu, Man Lung

    2009-01-01

    … Third, their effectiveness is independent of the distribution of other users, unlike the k-anonymity approach. The chapter characterizes the privacy models assumed by existing techniques and categorizes these according to their approach. The techniques are then covered in turn according…

  19. Research and Implementation of Client-server Based E-mail Translator

    2001-01-01

    The design and implementation of EATS, a machine translation system for e-mail, are presented. It first puts forward the notion of "instant machine translation service" and illustrates how it is provided through client-server mode in EATS. Then this paper gives a panoramic view of the realization of the Chinese-English bi-directional translation module through a multi-engine strategy. The prototype of the system has been successfully demonstrated in a campus net in PPP mode, with 70%~80% translation accuracy.

  20. SSDL personnel dosimetry system: migration from a client-server system into a web-based system

    The Personnel Dosimetry System has been used by the Secondary Standard Dosimetry Laboratory (SSDL), Nuclear Malaysia, for about ten years. The system is a computerized database system built on a client-server concept. It has been used by the Film Badge Laboratory, SSDL, to record client details, calculate film badge doses, manage radiation workers' data, generate dose reports, and retrieve statistical reports on film badge usage for reporting to monitoring bodies such as the Atomic Energy Licensing Board (AELB), the Ministry of Health and others. However, due to technical problems that frequently occur, the system is being replaced by a newly developed web-based system called e-SSDL. This paper describes the problems that regularly occurred in the previous system, explains how the client-server system is being replaced with a web-based system, and presents the differences between the previous and the current system. The paper also presents the detailed architecture of the new system and the new process introduced for processing film badges. (Author)

  1. Solid waste information and tracking system client-server conversion project management plan

    This Project Management Plan is the lead planning document governing the proposed conversion of the Solid Waste Information and Tracking System (SWITS) to a client-server architecture. This plan presents the content specified by American National Standards Institute (ANSI)/Institute of Electrical and Electronics Engineers (IEEE) standards for software development, with additional information categories deemed to be necessary to describe the conversion fully. This plan is a living document that will be reviewed on a periodic basis and revised when necessary to reflect changes in baseline design concepts and schedules. This PMP describes the background, planning and management of the SWITS conversion. It does not constitute a statement of product requirements. Requirements and specification documentation needed for the SWITS conversion will be released as supporting documents

  2. Whisker: a client-server high-performance multimedia research control system.

    Cardinal, Rudolf N; Aitken, Michael R F

    2010-11-01

    We describe an original client-server approach to behavioral research control and the Whisker system, a specific implementation of this design. The server process controls several types of hardware, including digital input/output devices, multiple graphical monitors and touchscreens, keyboards, mice, and sound cards. It provides a way to access this hardware for client programs, communicating with them via a simple text-based network protocol based on the standard Internet protocol. Clients to implement behavioral tasks may be written in any network-capable programming language. Applications to date have been in experimental psychology and behavioral and cognitive neuroscience, using rodents, humans, nonhuman primates, dogs, pigs, and birds. This system is flexible and reliable, although there are potential disadvantages in terms of complexity. Its design, features, and performance are described. PMID:21139173
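
    The following hedged Java sketch shows what talking to a server over a simple line-based text protocol, the style of interface described above, can look like from the client side. The host, port and command strings are illustrative placeholders, not Whisker's actual command set.

      import java.io.*;
      import java.net.Socket;

      // Hedged sketch of a client for a line-based text protocol.
      public class TextProtocolClient {
          public static void main(String[] args) throws IOException {
              try (Socket socket = new Socket("localhost", 3000);   // hypothetical host/port
                   PrintWriter out = new PrintWriter(
                           new OutputStreamWriter(socket.getOutputStream()), true);
                   BufferedReader in = new BufferedReader(
                           new InputStreamReader(socket.getInputStream()))) {
                  out.println("STATUS");              // hypothetical command: ask for server status
                  out.println("DISPLAY hello");       // hypothetical command: show text on a monitor
                  String line;
                  while ((line = in.readLine()) != null) {
                      System.out.println("server: " + line);   // print every event line the server sends
                  }
              }
          }
      }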

  3. Client Server Model Based DAQ System for Real-Time Air Pollution Monitoring

    Vetrivel, P.

    2014-01-01

    Full Text Available The proposed system consists of a client-server model based Data Acquisition Unit. The Embedded Web Server integrates the Pollution Server and the DAQ that collects air pollutant levels (CO, NO2, and SO2). The Pollution Server is designed by considering modern resource-constrained embedded systems. In contrast, an application server is designed for the efficient execution of programs and scripts supporting the construction of various applications. While a pollution server mainly deals with sending HTML for display in a web browser on the client terminal, an application server provides access to server-side logic for pollutant levels to be used by client application programs. The Embedded Web Server is an ARM MCB2300 board with internet connectivity; this standalone device gathers air pollutant levels and acts as the air pollution server. The Embedded Web Server is accessed by various clients.
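
    As a hedged sketch of the request/response side of such a system, the Java fragment below runs a small HTTP server that returns the latest pollutant readings to any web client. The real unit runs on an ARM MCB2300 board; this desktop-Java version with made-up sensor values and port only illustrates the client-server idea.

      import com.sun.net.httpserver.HttpServer;
      import java.io.OutputStream;
      import java.net.InetSocketAddress;
      import java.nio.charset.StandardCharsets;

      // Hedged sketch: an HTTP endpoint serving pollutant readings to web clients.
      public class PollutionServerSketch {
          public static void main(String[] args) throws Exception {
              HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
              server.createContext("/pollutants", exchange -> {
                  // In the real DAQ unit these values would come from the sensors.
                  String body = "CO=0.9 ppm\nNO2=0.04 ppm\nSO2=0.02 ppm\n";
                  byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
                  exchange.sendResponseHeaders(200, bytes.length);
                  try (OutputStream os = exchange.getResponseBody()) {
                      os.write(bytes);
                  }
              });
              server.start();
              System.out.println("Serving pollutant levels on http://localhost:8080/pollutants");
          }
      }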

  4. Remote information service access system based on a client-server-service model

    Konrad, Allan M.

    1996-01-01

    A local host computing system and a remote host computing system are connected by a network, with three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  5. Migration of the CNA maintenance information system to a client server architecture

    The paper explains the guidelines and methodology followed to carry out the migration of the CNA computerized maintenance system (SIGE) to a system with a client/server architecture based on ORACLE. The following guidelines were established for the migration: 1) Ensure that the new system would contain all the information of the former system, i.e., no information would be lost during migration. 2) Improve the technical design of the application, while maintaining at least the functionality of the former application. 3) Incorporate modifications into the application to permit incremental improvement of its functionality. 4) Carry out the migration at minimum cost in time and resources. To construct the application, a strict development methodology was followed and certain standards were drawn up to significantly increase development speed. Special use was made of: 1) data models, 2) process models which operate on the data model, 3) SQL-FORMS standards, and 4) safety features

  6. AVAILABILITY EVALUATION OF NETWORKS: AN APPROACH FOR N-TIER CLIENT SERVER ARCHITECTURE

    Flávia S. Coelho

    2003-06-01

    Full Text Available Published work on computer network reliability frequently uses availability as a performance measure. However, although several ways of defining availability have been proposed, none capture the overall level of service obtained by client hosts in a modern n-tier client/server architecture. We propose such a measure by calculating the fraction of client hosts receiving complete services from the network. We also extend a published, efficient heuristic method for calculating availability to take into account our new proposed measure. The end result is a procedure of polynomial complexity O(n_t^4), where n_t is the total number of components (hosts, links and interconnection equipment) in the network. Numerical results of applying the method to several networks are given.
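
    One way to write the proposed measure in formula form (a hedged formalization of the description above, not an expression quoted from the paper) is:

      \[
        A \;=\; \frac{\bigl|\{\, c \in C : c \text{ receives complete service from the network} \,\}\bigr|}{|C|},
      \]
      where $C$ is the set of client hosts; the extended heuristic evaluates this measure with
      polynomial complexity $O(n_t^{4})$, $n_t$ being the total number of components (hosts,
      links and interconnection equipment) in the network.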

  7. Realization of client/server management information system of coal mine based on ODBC in geology and survey

    Liu, Q.; Mao, S.; Yang, F.; Han, Z. [Shandong University of Science and Technology (China). Geoscience Department

    2000-08-01

    The paper describes in detail the framework and the application theory of Open Database Connectivity (ODBC), the formation of a client/server system of geological and surveying management information system, and the connection of the various databases. Then systematically, the constitution and functional realization of the geological management information system are introduced. 5 refs., 5 figs.

  8. Design and Development of a Client-Server School Payment Application Using Java and MySQL at SMA Yos Sudarso Cilacap

    Elisa Usada

    2012-08-01

    Full Text Available SMA Yos Sudarso is one of the schools that already uses computers for a variety of administrative tasks, but this use is not yet optimal because some areas of administration, in particular payment administration, are still handled manually. This research develops a client-server application based on JAVA and MySQL to manage school payment data, covering tuition (SPP) payments, building-fund payments and examination payments. The waterfall method was used as the reference for designing and developing the application. The system was modeled with use case and class diagrams, and the database was designed using an ERD. Testing was carried out with the black-box method, i.e., only checking that the planned functions run correctly without examining the internal code and algorithms. The tests show that the application can be run in a client-server configuration and that its functions work as intended. Automatic data backup remains a limitation of the application developed in this research. Keywords: payment application, JAVA and MySQL

  9. Design and implementation of an enterprise information system utilizing a component based three-tier client/server database system

    Akbay, Murat; Lewis, Steven C.

    1999-01-01

    The Naval Security Group currently requires a modern architecture to merge existing command databases into a single Enterprise Information System through which each command may manipulate administrative data. There are numerous technologies available to build and implement such a system. Component-based architectures are extremely well-suited for creating scalable and flexible three-tier Client/Server systems because the data and business logic are encapsulated within objects, allowing them t...

  10. A portable, GUI-based, object-oriented client-server architecture for computer-based patient record (CPR) systems.

    Schleyer, T K

    1995-01-01

    Software applications for computer-based patient records require substantial development investments. Portable, open software architectures are one way to delay or avoid software application obsolescence. The Clinical Management System at Temple University School of Dentistry uses a portable, GUI-based, object-oriented client-server architecture. Two main criteria determined this approach: preservation of the investment in software development and a smooth migration path to a computer-based patient record. The application is separated into three layers: graphical user interface, database interface, and application functionality. Implementation with generic cross-platform development tools ensures maximum portability. PMID:7662879
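
    Below is a hedged Java sketch of the three-layer separation named above (user interface, database interface, application functionality), expressed as interfaces so that each layer can be replaced without touching the others. The names and the trivial stand-in implementations are assumptions for illustration, not the actual Clinical Management System.

      // Hedged sketch: one interface per layer, wired together only through those interfaces.
      public class ThreeLayerSketch {
          interface PatientStore {                 // database interface layer
              String findRecord(String patientId);
          }
          interface ChartingService {              // application functionality layer
              String summarize(String patientId);
          }
          interface ChartView {                    // user interface layer
              void show(String text);
          }

          public static void main(String[] args) {
              PatientStore store = id -> "record for " + id;          // stand-in for a real database
              ChartingService service = id -> "Summary: " + store.findRecord(id);
              ChartView view = text -> System.out.println(text);      // stand-in for a GUI
              view.show(service.summarize("P-001"));
          }
      }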

  11. Volume serving and media management in a networked, distributed client/server environment

    Herring, Ralph H.; Tefend, Linda L.

    1993-01-01

    The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.

  12. Client-Server and Peer-to-Peer Ad-hoc Network for a Flexible Learning Environment

    Ferial Khaddage

    2011-01-01

    Full Text Available Peer-to-Peer (P2P networking in a mobile learning environment has become a popular topic of research. One of the new emerging research ideas is on the ability to combine P2P network with server-based network to form a strong efficient portable and compatible network infrastructure. This paper describes a unique mobile network architecture, which reflects the on-campus students’ need for a mobile learning environment. This can be achieved by combining two different networks, client-server and peer-to-peer ad-hoc to form a sold and secure network. This is accomplished by employing one peer within the ad-hoc network to act as an agent-peer to facilitate communication and information sharing between the two networks. It can be implemented without any major changes to the current network technologies, and can combine any wireless protocols such as GPRS, Wi-Fi, Bluetooth, and 3G.

  13. Performance analysis of hybrid (M/M/1 and M/M/m) client server model using Queuing theory

    Saptarshi Gupta

    2013-01-01

    Full Text Available The Internet uses packet switching and is a delay system. When a request comes from the client side, the server may serve it immediately or the request goes into a queue for some time. A client is the computer which requests the resources (mail, audio, video, etc.), equipped with a user interface (usually a web browser) for presentation purposes. DNS (Domain Name Server) will map the web address to its corresponding Internet Protocol address. All communication takes place using the transfer of packets. Packets arrive according to a Poisson process with rate λ. The router will route the request to the particular Internet Protocol (IP) address of the application server. The application server's task is to provide the requested resources (mail, audio, video, authentication), but by calling on another server (the data server), which provides the application server with the data it requires. This paper deals with single server and multiple server queues. It intends to analyze the performance (average queue length, average response time, average waiting time) of the hybrid (M/M/1, M/M/m) client server model using queuing theory.
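
    For reference, the standard closed-form M/M/1 results that this kind of performance analysis builds on are given below (textbook formulas under the usual stability assumption, not the paper's own hybrid-model derivation). With arrival rate $\lambda$, service rate $\mu$ and utilization $\rho = \lambda/\mu < 1$:

      \[
        L = \frac{\rho}{1-\rho}, \qquad
        L_q = \frac{\rho^{2}}{1-\rho}, \qquad
        W = \frac{1}{\mu-\lambda}, \qquad
        W_q = \frac{\rho}{\mu-\lambda},
      \]
      where $L$ is the average number in the system, $L_q$ the average queue length,
      $W$ the average response time and $W_q$ the average waiting time in the queue.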

  14. Object-oriented design for LHD data acquisition using client-server model

    The LHD data acquisition system handles a huge amount of data exceeding over 600MB per shot. The fully distributed processing and the object-oriented system design are the main principles of this system. Its wide flexibility has been realized by introducing the object-oriented method into the data processing, in which the object-sharing and the class libraries will provide the unified way of data handling for both servers and clients program developments. The object class libraries are written in C++, and the network object-sharing is provided through a commercial software called HARNESS. As for the CAMAC setup, the Java script can use the C++ class libraries and thus establishes the relationship between the object-oriented database and the WWW server. In LHD experiments, the CAMAC system and the Windows NT operating system are applied for digitizing and acquiring data, respectively. For the purpose of the LHD data acquisition, the new CAMAC handling softwares which work on Windows NT have been developed to manipulate the SCSI-connected crate controllers. The CAMAC command lists and diagnostic data classes are shared between clients and servers. A lump of diagnostic data mass is treated as a part of an object by the object-oriented programming. (author)

  15. Object-oriented design for LHD data acquisition using client-server model

    Kojima, M.; Nakanishi, H.; Hidekuma, S. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    1997-11-01

    The LHD data acquisition system handles a huge amount of data exceeding over 600MB per shot. The fully distributed processing and the object-oriented system design are the main principles of this system. Its wide flexibility has been realized by introducing the object-oriented method into the data processing, in which the object-sharing and the class libraries will provide the unified way of data handling for both servers and clients program developments. The object class libraries are written in C++, and the network object-sharing is provided through a commercial software called HARNESS. As for the CAMAC setup, the Java script can use the C++ class libraries and thus establishes the relationship between the object-oriented database and the WWW server. In LHD experiments, the CAMAC system and the Windows NT operating system are applied for digitizing and acquiring data, respectively. For the purpose of the LHD data acquisition, the new CAMAC handling softwares which work on Windows NT have been developed to manipulate the SCSI-connected crate controllers. The CAMAC command lists and diagnostic data classes are shared between clients and servers. A lump of diagnostic data mass is treated as a part of an object by the object-oriented programming. (author)

  16. Study on the Distributed Routing Algorithm and Its Security for Peer-to-Peer Computing

    ZHOU Shi-jie

    2005-01-01

    By virtue of its great efficiency and graceful architecture, the Client/Server model has been prevalent for more than twenty years, but some disadvantages are also recognized. It is not so suitable for the next generation Internet (NGI), which will provide a high-speed communication platform. In particular, the service bottleneck of the Client/Server model will become more and more severe in such a high-speed networking environment. Some approaches have been proposed to overcome these disadvantages. Among them, distributed computing is considered an important candidate to replace the Client/Server model.

  17. Peer-assisted content distribution networks: performance gains and server capacity savings

    Rimac, I.; Borst, S.C.; Walid, A.

    2008-01-01

    Content distribution networks are experiencing tremendous growth, in terms of traffic volume, scope, and diversity, fueled by several technological advances and competing paradigms. Traditional client/server architectures as deployed in the majority of today's commercial networks provide high reliab

  18. Advanced 3-D analysis, client-server systems, and cloud computing-Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement.

    Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen

    2013-06-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning across multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR. PMID:24282750

  19. GALEON Phase 2: Testing Gateways Between Formal Standard Interfaces and Existing Community Standard Client/server Implementations

    Domenico, B.; Nativi, S.; Woolf, A.; Whittaker, T.; Husar, R. B.; Bigagli, L.

    2006-12-01

    The Open Geospatial Consortium (OGC) Web Coverage Service (WCS) revision 1.1 specification includes many modifications that are important to the communities working with existing services and clients based on netCDF (network Common Data Form), THREDDS (THematic Real-time Environmental Distributed Data Services), OPeNDAP (Open-source Project for Network Data Access Protocol), and ADDE (Abstract Data Distribution Environment) technologies. Chief among the WCS changes is the requirement that WCS binary encoding formats have documented application profiles. NetCDF will be among the first WCS binary encoding format profiles. In addition, WCS 1.1 enables multiple fields in a coverage, 3 spatial dimensions, 2 time dimensions (e.g., the time a forecast was run and the forecast times within the run), relative time (e.g., the latest image), non-spatial dimensions (e.g., pressure or density), and irregular grids. In Phase 2 of the GALEON (Geo-interface for Land, Environment, Earth, Ocean NetCDF) Interoperability Experiment, the participants will 1. implement and test clients and servers that conform to the new WCS 1.1 spec and experiment with them on a wide range of real-world datasets; 2. test the OGC CS-W (Catalog Services for the Web) as a means for accessing lists of datasets available on WCS servers; and 3. evaluate various OGC GML (Geography Markup Language) dialects as a means for representing the information in netCDF datasets, including ncML-GML (netCDF Markup Language-GML), CSML (Climate Sciences Modeling Language), and GMLJP2 (GML for JPEG 2000). Many of the datasets and catalogs for these experiments will be from existing netCDF, THREDDS, OPeNDAP, and ADDE servers.

  20. 基于C/S模式的汽车(零部件)营销MIS的开发%Development of a Marketing MIS for Automobiles/Automobile Parts Based on Client/Server Mode

    张国方; 王宇宁; 张能立

    2001-01-01

    A marketing MIS (Management Information System) for automobiles and automobile parts based on the client/server mode is developed by combining modern MIS theory, computer modeling technology, information management technology and the business practice of automobile (parts) marketing in China. The model offers a feasible, low-cost solution to problems common in automobile (parts) enterprises, such as delayed information transfer, primitive means of information retrieval, inconsistent data caused by duplicate entry, and the resulting inaccurate market forecasts and slow market response. The system consists of 11 subsystems, including plan and order, physical distribution, finance management and risk control, the expenditure squaring of quality warranty and its supervision, the dynamic supervision of product quality, the sales of parts, market forecasting, market decision-making, and the interior management of the enterprise. The marketing MIS has been successfully applied in a large automobile company: the material flow and the finance flow are monitored and controlled continuously through the information flow, the marketing business and the commercial flow are improved, and non-production operating costs are reduced.

  1. Client-Server Password Recovery

    Chmielewski, L.; Hoepman, J.H.; Rossum, P. van

    2009-01-01

    Human memory is not perfect – people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the password.

  2. WIDE - A Distributed Architecture for Workflow Management

    S. Ceri; Grefen, P.W.P.J.; G. Sánchez

    1997-01-01

    This paper presents the distributed architecture of the WIDE workflow management system. We show how distribution and scalability are obtained by the use of a distributed object model, a client/server architecture, and a distributed workflow server architecture. Specific attention is paid to the extended transaction support and active rule support subarchitectures.

  3. 基于Client/Server模式的大型管理信息系统的研究%Research for Large Management Information System Based Client/Server system

    袁庆云; 朱慧

    1994-01-01

    The Client/Server architecture is the mainstream computing model for information processing in the 1990s. Starting from the advantages of the Client/Server architecture and considering the characteristics of LMIS (Large Management Information Systems), this paper expounds the main value of adopting a Client/Server architecture for LMIS. A model instance of a Client/Server architecture in a laboratory environment and the conclusions of its analysis are also given.

  4. Multi-tiered Client/Server Database Application Based on Web%基于Web的多层客户/服务器数据库应用程序

    李文生; 潘世兵

    2001-01-01

    This paper discusses the computing model of Web-based multi-tiered client/server database applications and proposes a method and the steps for constructing such applications with Delphi.

  5. 基于客户端/服务端结构的牧场奶源数字化管理系统的构建%Based on Client/Server Foundation of the Grazing Milk Digital Management System

    胡玉龙; 肖建华; 王洪斌; 施路一; 赵东方

    2009-01-01

    According to the needs of pasture milk-source management, a milk-source management system was built on the .NET software platform using an N-tier architecture and the client/server (C/S) mode. The system manages general information on herds and individual animals and, on this basis, centers on milk-source management and implements functions for the analysis and statistics of the various milk-production indicators. The aim is to make pasture milk-source management computerized, standardized, data-driven and transparent, and to provide data support for pasture enterprise resource planning (ERP) management.

  6. Applying Peer-to-Peer Technology to the Building of Distributed Educational Systems

    Leighton, Greg; Muldner, Tomasz

    2005-01-01

    Existing educational systems built for cooperative and collaborative activities are most often based on the client/server paradigm of distributed computing. This article shows that a new model for distributed computing, Peer-to-Peer (P2P), provides new opportunities for building distributed educational applications. It begins by reviewing general…

  7. Security in a Client/Server Environment.

    Bernbom, Gerald; And Others

    1994-01-01

    Faced with the challenge of providing security across a complex, multiprotocol institutional information network, computing services at Indiana University implemented a responsive, collaborative security architecture designed for the future. Information systems design, security principles and strategy, and implementation are described. (Author/MSE)

  8. The Distributed Workflow Management System--FlowAgent

    王文军; 仲萃豪

    2000-01-01

    While mainframe and 2-tier client/server systems have serious problems with flexibility and scalability for large-scale business processes, the 3-tier client/server architecture and object-oriented system modeling, which construct business processes on service components, seem to bring software systems some scalability. As an enabling infrastructure for the object-oriented methodology, a distributed WFMS (Workflow Management System) can flexibly describe business rules among autonomous 'service tasks' and support the scalability of large-scale business processes. However, current distributed WFMSs still have difficulty managing a large number of distributed tasks; the 'multi-TaskDomain' architecture of FlowAgent attempts to solve this problem and to provide a dynamic, distributed environment for task scheduling.

  9. A Portable Debugger for Parallel and Distributed Programs

    Cheng, Doreen Y.; Hood, Robert; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In this paper, we describe the design and implementation of a portable debugger for parallel and distributed programs. The design incorporates a client-server model in order to isolate non-portable debugger code from the user interface. The precise definition of a protocol for client-server interaction permits a high degree of portability of the client user interface. Replication of server components permits the implementation of a debugger for distributed computations. Portability across message passing implementations is achieved with a protocol that dictates the interaction between a message passing library and the debugger. This permits the same debugger to be used on both PVM and MPI programs. The process abstractions used for debugging message-passing programs can be easily adapted to debug HPF programs at the source level. This allows the debugger to present information hidden in tool-generated code in a meaningful manner.

  10. Development Model of "Thin Client Server, Transaction Server, Data Server" Computerized Accounting%“瘦客户机事务处理服务器数据服务器”会计电算化的发展模式

    冀永刚

    2011-01-01

    By analyzing the requirements that modern enterprise management and information technology place on accounting software, the paper proposes a "thin client / transaction server / data server" development model for computerized accounting, and analyzes the characteristics of computerized accounting under this model and the main factors constraining its development.

  11. Client-server design provides model for 'coopetition' alliances.

    Friedman, B A; Barnes, B W

    1992-09-01

    As healthcare organizations move from isolated departments to integrated information sharing, who will pilot this change? Both the director and manager of pathology data systems at the University of Michigan Hospitals in Ann Arbor suggest vendors, system integrators and hospital administrators put aside competition and try a new concept--"coopetition"--to solve the problem. PMID:10122914

  12. Implementasi Client Server Pada Drive Thru Dengan Menggunakan Barcode (Client-Server Implementation for a Drive-Thru Using Barcodes)

    Oktaviani, Masyita

    2012-01-01

    Technology developments affect the way institutions work, as information technology is used to improve the smoothness and speed of the information distribution process. Manual processing is still considered ineffective because it remains dependent on archival paper sheets arranged in the motor vehicle tax payment system. The implementation of an information-technology-based client server for a drive-thru with barcodes for vehicle tax payments is intended to assist the tax payment process and the tax officers in carrying out this system.

  13. Client/Server data serving for high performance computing

    Wood, Chris

    1994-01-01

    This paper will attempt to examine the industry requirements for shared network data storage and sustained high speed (10's to 100's to thousands of megabytes per second) network data serving via the NFS and FTP protocol suite. It will discuss the current structural and architectural impediments to achieving these sorts of data rates cost effectively today on many general purpose servers, and will describe an architecture and resulting product family that addresses these problems. The sustained performance levels that were achieved in the lab will be shown, as well as a discussion of early customer experiences utilizing both the HIPPI-IP and ATM OC3-IP network interfaces.

  14. Developing and Marketing a Client/Server-Based Data Warehouse.

    Singleton, Michele; And Others

    1993-01-01

    To provide better access to information, the University of Arizona information technology center has designed a data warehouse accessible from the desktop computer. A team approach has proved successful in introducing and demonstrating a prototype to the campus community. (Author/MSE)

  15. Moving the Hazard Prediction and Assessment Capability to a Distributed, Portable Architecture

    Lee, RW

    2002-09-05

    The Hazard Prediction and Assessment Capability (HPAC) has been re-engineered from a Windows application with tight binding between computation and a graphical user interface (GUI) to a new distributed object architecture. The key goals of this new architecture are platform portability, extensibility, deployment flexibility, client-server operations, easy integration with other systems, and support for a new map-based GUI. Selection of Java as the development and runtime environment is the major factor in achieving each of the goals, platform portability in particular. Portability is further enforced by allowing only Java components in the client. Extensibility is achieved via Java's dynamic binding and class loading capabilities and a design-by-interface approach. HPAC supports deployment on a standalone host, as a heavy client in client-server mode with data stored on the client but calculations performed on the server host, and as a thin client with data and calculations on the server host. The principal architectural element supporting deployment flexibility is the use of Universal Resource Locators (URLs) for all file references. Java WebStart is used for thin client deployment. Although there were many choices for the object distribution mechanism, the Common Object Request Broker Architecture (CORBA) was chosen to support HPAC client-server operation. HPAC complies with version 2.0 of the CORBA standard and does not assume support for pass-by-value method arguments. Execution in standalone mode is expedited by having most server objects run in the same process as client objects, thereby bypassing CORBA object transport. HPAC provides four levels for access by other tools and systems, starting with a Windows library providing transport and dispersion (T&D) calculations and output generation, detailed and more abstract sets of CORBA services, and reusable Java components.
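
    The use of URLs for all file references is what lets the same code run standalone, as a heavy client or as a thin client. The fragment below is only a generic Java illustration of that idea, not HPAC code: one loading routine serves both local files and resources on a remote server, with the file and http URLs being hypothetical examples.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;

        /** Illustrative only: one loader that works for local files and remote
         *  resources alike, because every reference is expressed as a URL. */
        public class UrlResourceLoader {
            public static String load(String urlString) throws Exception {
                StringBuilder text = new StringBuilder();
                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(new URL(urlString).openStream()))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        text.append(line).append('\n');
                    }
                }
                return text.toString();
            }

            public static void main(String[] args) throws Exception {
                // Standalone deployment reads from the local disk ...
                System.out.println(load("file:///tmp/hazard-input.txt"));
                // ... while a thin client would read the same data from a server host.
                System.out.println(load("http://server.example.org/hpac/hazard-input.txt"));
            }
        }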

  16. A DISTRIBUTED HYPERMAP MODEL FOR INTERNET GIS

    2000-01-01

    The rapid development of Internet technology makes it possible to integrate GIS with the Internet, forming Internet GIS. Internet GIS is based on a distributed client/server architecture and on TCP/IP and IIOP. When constructing and designing Internet GIS, we face the problem of how to express the information units of Internet GIS. In order to solve this problem, this paper presents a distributed hypermap model for Internet GIS. This model provides a solution for organizing and managing Internet GIS information units. It also illustrates the relations between two information units, and within an internal information unit, on both clients and servers. On the basis of this model, the paper presents expressions for hypermap relations and hypermap operations. The usage of this model is shown in the implementation of a prototype system.

  17. Next Generation Multimedia Distributed Data Base Systems

    Pendleton, Stuart E.

    1997-01-01

    The paradigm of client/server computing is changing. The model of a server running a monolithic application and supporting clients at the desktop is giving way to a different model that blurs the line between client and server. We are on the verge of plunging into the next generation of computing technology--distributed object-oriented computing. This is not only a change in requirements but a change in opportunities, and it requires a new way of thinking for Information System (IS) developers. The information system demands caused by global competition require ever more access to decision-making tools. Simply put, object-oriented technology has been developed to supersede the current design process of information systems, which is not capable of handling next generation multimedia.

  18. DIRAC - Distributed Infrastructure with Remote Agent Control

    Tsaregorodtsev, A; Closier, J; Frank, M; Gaspar, C; van Herwijnen, E; Loverre, F; Ponce, S; Graciani Diaz, R.; Galli, D; Marconi, U; Vagnoni, V; Brook, N; Buckley, A; Harrison, K; Schmelling, M; Egede, U; Bogdanchikov, A; Korolko, I; Washbrook, A; Palacios, J P; Klous, S; Saborido, J J; Khan, A; Pickford, A; Soroko, A; Romanovski, V; Patrick, G N; Kuznetsov, G; Gandelman, M

    2003-01-01

    This paper describes DIRAC, the LHCb Monte Carlo production system. DIRAC has a client/server architecture based on: Compute elements distributed among the collaborating institutes; Databases for production management, bookkeeping (the metadata catalogue) and software configuration; Monitoring and cataloguing services for updating and accessing the databases. Locally installed software agents implemented in Python monitor the local batch queue, interrogate the production database for any outstanding production requests using the XML-RPC protocol and initiate the job submission. The agent checks and, if necessary, installs any required software automatically. After the job has processed the events, the agent transfers the output data and updates the metadata catalogue. DIRAC has been successfully installed at 18 collaborating institutes, including the DataGRID, and has been used in recent Physics Data Challenges. In the near to medium term future we must use a mixed environment with different types of grid mid...

  19. Research on Maintenance Information Management System for Distributed Manufacture System

    张之敬; 戴琳; 陶俐言; 周娟

    2004-01-01

    An architecture and design of a maintenance information management system for a distributed manufacturing system is presented in this paper, and its related key technologies are also studied and implemented. A framework of the maintenance information management system oriented to human-machine monitoring is designed, and, using an object-oriented method, a general maintenance information management system based on an SQL Server engineering database and adopting a client/server/database three-layer mode can be established. Discussions of the control technologies of the maintenance information management system and of the remote distributed diagnostics and maintenance system are then emphasized. The system is not only able to identify and diagnose faults of the distributed manufacturing system quickly and improve system stability, but also has intelligent maintenance functions.

  20. Query processing in distributed, taxonomy-based information sources

    Meghini, Carlo; Coltella, Veronica; Analyti, Anastasia

    2011-01-01

    We address the problem of answering queries over a distributed information system storing objects indexed by terms organized in a taxonomy. The taxonomy consists of subsumption relationships between negation-free DNF formulas on terms and negation-free conjunctions of terms. In the first part of the paper, we consider the centralized case, deriving a hypergraph-based algorithm that is efficient in data complexity. In the second part of the paper, we consider the distributed case, presenting alternative ways of implementing the centralized algorithm. These ways derive from two basic criteria: direct vs. query re-writing evaluation, and centralized vs. distributed data or taxonomy allocation. Combinations of these criteria cover a wide spectrum of architectures, ranging from client-server to peer-to-peer. We evaluate the performance of the various architectures by simulation on a network with O(10^4) nodes, and derive final results. An extensive review of the relevant literature is finally included.

  1. Modular Workflow Engine for Distributed Services using Lightweight Java Clients

    Vetter, R -M; Peetz, J -V

    2009-01-01

    In this article we introduce the concept and the first implementation of a lightweight client-server framework as middleware for distributed computing. On the client side, an installation without administrative rights or privileged ports can turn any computer into a worker node. Only a Java runtime environment and the JAR files comprising the workflow client are needed. To connect all clients to the engine, one open server port is sufficient. The engine submits data to the clients and orchestrates their work using workflow descriptions from a central database. Clients request new task descriptions periodically, so the system is robust against network failures. In the basic set-up, data up- and downloads are handled via HTTP communication with the server. The performance of the modular system could additionally be improved using dedicated file servers or distributed network file systems. We demonstrate the design features of the proposed engine in real-world applications from mechanical engineering. We have used ...
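
    The polling pattern described above can be sketched in a few lines of plain Java; this is an illustrative skeleton rather than the authors' implementation, and the task endpoint URL, polling interval and one-line task format are assumptions.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        /** Sketch of a lightweight worker: poll the engine for task descriptions
         *  over HTTP; a failed request is simply retried on the next cycle. */
        public class WorkflowClient {
            private static final String TASK_URL = "http://engine.example.org/nextTask"; // assumed endpoint
            private static final long POLL_INTERVAL_MS = 5000;                           // assumed interval

            public static void main(String[] args) throws InterruptedException {
                while (true) {
                    try {
                        HttpURLConnection conn =
                                (HttpURLConnection) new URL(TASK_URL).openConnection();
                        try (BufferedReader in = new BufferedReader(
                                new InputStreamReader(conn.getInputStream()))) {
                            String task = in.readLine();          // one-line task description
                            if (task != null && !task.isEmpty()) {
                                System.out.println("executing task: " + task);
                                // ... run the task, then upload results via HTTP POST ...
                            }
                        }
                    } catch (Exception e) {
                        // Network failure: ignore and poll again on the next cycle.
                    }
                    Thread.sleep(POLL_INTERVAL_MS);
                }
            }
        }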

  2. On the relevance of efficient, integrated computer and network monitoring in HEP distributed online environment

    Carvalho, D F; Delgado, V; Albert, J N; Bellas, N; Javello, J; Miere, Y; Ruffinoni, D; Smith, G

    1996-01-01

    Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those systems dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, the aim is to integrate the various functions of DCCS monitoring into one general purpose Multi-layer ...

  3. A secure communications infrastructure for high-performance distributed computing

    Foster, I.; Koenig, G.; Tuecke, S. [and others]

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.
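
    The fine-grained security/performance tradeoff mentioned above can be pictured as a per-channel choice between a plain and a TLS-protected socket. The sketch below uses only standard Java sockets, not the Nexus library, and the host names and ports are placeholders.

        import java.net.Socket;
        import javax.net.ssl.SSLSocketFactory;

        /** Illustration of per-channel security selection (not the Nexus API):
         *  control traffic goes over TLS, bulk data over a plain socket. */
        public class ChannelFactory {
            public static Socket open(String host, int port, boolean secure) throws Exception {
                if (secure) {
                    // TLS-protected channel for sensitive control messages.
                    return SSLSocketFactory.getDefault().createSocket(host, port);
                }
                // Plain channel where raw throughput matters more than confidentiality.
                return new Socket(host, port);
            }

            public static void main(String[] args) throws Exception {
                // Hypothetical endpoints: a secure control channel and a nonsecure bulk channel.
                Socket control = open("compute.example.org", 9443, true);
                Socket bulk    = open("compute.example.org", 9080, false);
                control.close();
                bulk.close();
            }
        }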

  4. Accommodating Heterogeneity in a Debugger for Distributed Computations

    Hood, Robert; Cheng, Doreen; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In an ongoing project at NASA Ames Research Center, we are building a debugger for distributed computations running on a heterogeneous set of machines. Historically, such debuggers have been built as front-ends to existing source-level debuggers on the target platforms. In effect, these back-end debuggers are providing a collection of debugger services to a client. The major drawback is that, because of inconsistencies among the back-end debuggers, the front-end must use a different protocol when talking to each back-end debugger. This can make the front-end quite complex. We have avoided this complexity problem by defining the client-server debugger protocol. While it does require vendors to adapt their existing debugger code to meet the protocol, vendors are generally interested in doing so because the approach has several advantages. In addition to solving the heterogeneous platform debugging problem, it will be possible to write interesting debugger user interfaces that can be easily ported across a variety of machines. This will likely encourage investment in application-domain specific debuggers. In fact, the user interface of our debugger will be geared to scientists developing computational fluid dynamics codes. This paper describes some of the problems encountered in developing a portable debugger for heterogeneous, distributed computing and how the architecture of our debugger avoids them. It then provides a detailed description of the debugger client-server protocol. Some of the more interesting attributes of the protocol are: (1) It is object-oriented; (2) It uses callback functions to capture the asynchronous nature of debugging in a procedural fashion; (3) It contains abstractions, such as in-line instrumentation, for the debugging of computationally intensive programs; (4) For remote debugging, it has operations that enable the implementor to optimize message passing traffic between client and server. The soundness of the protocol is being tested through
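
    The abstract does not reproduce the protocol itself; the following Java fragment is merely a schematic rendering of attributes (1) and (2) above, with invented interface and method names: an object-oriented debugger service plus a callback interface through which asynchronous debug events reach the client.

        /** Schematic rendering (not the actual protocol) of an object-oriented
         *  debugger service with asynchronous callbacks. */
        interface DebugEventListener {
            void onBreakpointHit(int processId, String sourceFile, int line);
            void onProcessExited(int processId, int exitCode);
        }

        interface DebuggerServer {
            int  launch(String executable, String[] args);        // returns a process id
            void setBreakpoint(int processId, String sourceFile, int line);
            void resume(int processId);
            void subscribe(DebugEventListener listener);          // asynchronous event delivery
        }

        /** A client-side listener: events arrive asynchronously from the server. */
        class ConsoleListener implements DebugEventListener {
            public void onBreakpointHit(int pid, String file, int line) {
                System.out.println("process " + pid + " stopped at " + file + ":" + line);
            }
            public void onProcessExited(int pid, int exitCode) {
                System.out.println("process " + pid + " exited with code " + exitCode);
            }
        }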

  5. Considerations for control system software verification and validation specific to implementations using distributed processor architectures

    Until recently, digital control systems have been implemented on centralized processing systems to function in one of several ways: (1) as a single processor control system; (2) as a supervisor at the top of a hierarchical network of multiple processors; or (3) in a client-server mode. Each of these architectures uses a very different set of communication protocols. The latter two architectures also belong to the category of distributed control systems. Distributed control systems can have a central focus, as in the cases just cited, or be quite decentralized in a loosely coupled, shared responsibility arrangement. This last architecture is analogous to autonomous hosts on a local area network. Each of the architectures identified above will have a different set of architecture-associated issues to be addressed in the verification and validation activities during software development. This paper summarizes results of efforts to identify, describe, contrast, and compare these issues

  6. A distributed control system status report of the munich accelerator control

    A system of computers connected by a local area network (ARCNET) controls the Munich accelerator facility. This includes the ion sources, the tandem accelerator, the beam transport system, the gas handling plant, parts of the experimental setup and also an ion source test bench. ARCNET is a deterministic multi-master network with arbitrary topology, using coax cables and optical fibers. Crates with single-board computers and I/O boards (analog, parallel or serial digital), depending on the devices being controlled, are distributed all over the building. Personal computers serve as user interfaces. The LAN communication protocol is a client/server protocol. The communication and programming language for the single-board computers is Forth. The user-mode drivers in the personal computers are also written in Forth. The tools for the operators are MS-Windows applications, programmed in Forth, C++ or Visual Basic. Links to MS-Office applications are available, too.

  7. A Mechanism Supporting the Client/Server Relationship in the Operating System of Distributed System “THUDS”

    廖先Zhi; 金兰

    1991-01-01

    This paper presents a distributed operating system modeled as an abstract machine that provides all the distributed processes with the same set of services. The kernel of our operating system supports services which are achieved by a remote procedure call on requests by parallel processes. Therefore, a scheme for handling the client-server relationship is required. In our system there is more than one client and, at least, a receive would be required for each. Similarly, there is more than one server, such that the send in a client should produce a message that can be received by every server. Consequently, a mechanism well suited for programming multiple-clients/single-server and single-client/multiple-servers interactions is proposed.

  8. Free Software Development. 4. Client-Server Implementation of Bone Age Assessment Calculations

    Sorana Daniela BOLBOACĂ

    2003-03-01

    In pediatrics, bone age, also called skeletal maturity, an expression of the biological maturity of a child, is an important quantitative measure for the clinical diagnosis of endocrinological problems and growth disorders. The present paper discusses a JavaScript implementation of the Tanner-Whitehouse method on computer, with a complete graphical interface that includes pictures and explanations for every bone. The program allows selecting a stage (from a set of 7 or 8 stages) for every bone (from a set of 20 bones), and also allows the user to input some specific data such as natural age, sex and place of residence. Based on the reported TW2 values and the selected and input data, the program computes the bone age. JavaScript functions and objects were used in order to make an efficient and adaptive program. Note that, implemented in the classic way, the program would require more than 160 groups of instructions for the user interface design alone. Using dynamic page creation, the program became smaller and more efficient. The program was tested and put on a web server for direct testing via the HTTP service, from where it can also be downloaded and run on a personal computer without an internet connection: http://vl.academicdirect.ro/medical_informatics/bone_age/v1.0/

  9. Free Software Development. 4. Client-Server Implementation of Bone Age Assessment Calculations

    Sorana Daniela BOLBOACĂ; Carmencita DENEŞ; Andrei ACHIMAŞ CADARIU BELA

    2003-01-01

    In pediatrics, bone age, also called skeletal maturity, an expression of the biological maturity of a child, is an important quantitative measure for the clinical diagnosis of endocrinological problems and growth disorders. The present paper discusses a JavaScript implementation of the Tanner-Whitehouse method on computer, with a complete graphical interface that includes pictures and explanations for every bone. The program allows selecting a stage (from a set of 7 or 8 stages) for every bone (from a ...

  10. GPGPU Based Parallelized Client-Server Framework for Providing High Performance Computation Support

    Banerjee, Poorna; Dave, Amit

    2015-01-01

    Parallel data processing has become indispensable for processing applications involving huge data sets. This brings into focus the Graphics Processing Units (GPUs), which emphasize many-core computing. With the advent of General Purpose GPUs (GPGPU), applications not directly associated with graphics operations can also harness the computation capabilities of GPUs. Hence, it would be beneficial if the computing capabilities of a given GPGPU could be task optimized and made available. This p...

  11. Perancangan Client Server Tanpa Harddisk Menggunakan Linux Dan Windows Server 2003 (Design of a Diskless Client-Server System Using Linux and Windows Server 2003)

    Setiawan, Dedi

    2011-01-01

    The objective of the research is to achieve efficiency and effectiveness in managing a computer network by using Linux with the Linux Terminal Server Project (LTSP) application and Windows Server 2003 with diskless computers. There are three components in this system: the computer network, the two operating system servers and the diskless computers. A computer network is a collection of individual computers that complement one another in completing tasks. The operating system...

  12. Design of VoIP Paralleled Client-Server Software for Multicore

    Khan, Zeeshan

    2013-01-01

    As "Voice over IP" has become more prevalent and many client and server applications have been designed for it, the VoIP industry has seen the need for faster, more capable systems to keep up. Traditionally, system speed-up has been achieved by increasing clock speeds, but conventional single-core CPU clock rates peaked a few years ago due to very high power consumption and heating problems. Recently, system speed-up has been achieved by adding multiple processing cores to the same pro...

  13. A Client-Server Computational Tool for Integrated Artificial Intelligence Curriculum.

    Holder, Lawrence B.; Cook, Diane J.

    2001-01-01

    Describes the development of a Web-based multimedia delivery method of increasing students' interest in artificial intelligence (AI). The course material features an integrated simulation environment that allows students to develop and test AI algorithms in a dynamic and uncertain visual environment. Evaluated the effect of the simulation on the…

  14. Distributed-operating-system kernel for networked multiprocessor work stations

    Millard, B.R.

    1986-01-01

    This dissertation presents a new kernel architecture for a Distributed Operating System targeted specifically at contemporary high-performance work stations comprised of multi-microprocessor microcomputers connected by a local-area network. The motivations for and requirements of the kernel architecture provide insights and lead to a better understanding of the practical application of software-construction techniques and communication methodologies used in modern Distributed Operating Systems for multiprocessor computers. These concepts have been embedded in the BIGSAM Distributed Operating System project. Discussion centers on interprocess communication methods and kernel structure in an environment that provides a high degree of concurrency and that additionally must be portable to a range of contemporary hardware. The effects of loosely-coupled cooperating work stations on the interprocess communication and kernel structure requirements of a Distributed Operating System are examined. The advantages of using the client/server model in an object-oriented, capability-based architecture that provides communication by message passing, and how these properties also direct the resultant kernel architecture, are covered. Semantics and specifications for a current implementation of the BIGSAM Distributed Operating System kernel are presented to further illustrate the derived architecture.

  15. Smart Card Identification Management Over A Distributed Database Model

    Olatubosun Olabode

    2011-01-01

    Problem statement: An effective national identification system is a necessity in any national government for the proper implementation and execution of its governmental policies and duties. Approach: Such data can be held in a database relation in a distributed database environment. To date, the Nigerian government has yet to establish an effective and efficient National Identification Management System despite the huge amount of money expended on the project. Results: This article presents a Smart Card Identification Management System over a Distributed Database Model. The model was implemented using a client/server architecture between a server and multiple clients. A programmable smart card to store identification details, including biometric features, was proposed. Among the many variables stored in the smart card are the personal identification number, gender, date of birth, place of birth, place of residence, citizenship, continuously updated information on vital status, and the identity of parents and spouses. Conclusion/Recommendations: A conceptualization of the database structures and architecture of the distributed database model is presented. The designed distributed database model was intended to solve the lingering problems associated with multiple identification in a society.

  16. Distributed Intrusion Detection System for Ad hoc Mobile Networks

    Muhammad Nawaz Khan

    2012-01-01

    In mobile ad hoc networks, resource restrictions on the bandwidth, processing capabilities, battery life and memory of mobile devices lead to a tradeoff between security and resource consumption. Due to some unique properties of MANETs, proactive security mechanisms like authentication, confidentiality, access control and non-repudiation are hard to put into practice, while some additional security requirements are also needed, such as cooperation fairness, location confidentiality, data freshness and absence of traffic diversion. Traditional security mechanisms, i.e. authentication and encryption, provide a security baseline for MANETs, but a reactive security mechanism is also required that analyzes the routing packets and checks the overall network behavior of the MANET. Here we propose a local, distributed intrusion detection system for ad hoc mobile networks. In the proposed distributed IDS, each mobile node works as a smart agent. Data are collected locally by the node, which analyzes them for malicious activity. If any abnormal activity is discovered, it informs the surrounding nodes as well as the base station. The system works like a client-server model: each node works in collaboration with the server, which updates the node's database using a Markov process. The proposed local distributed IDS shows a balance between the false positive and false negative rates. A reactive security mechanism is very useful for finding abnormal activities even when proactive security mechanisms are present. The distributed local IDS is useful for deep-level inspection and is suited to the varying nature of MANETs.

  17. Medical Image Dynamic Collaborative Processing on the Distributed Environment

    2003-01-01

    A new trend in the development of medical image processing systems is to enhance the sharing of medical resources and the collaborative work of medical specialists. This paper presents an architecture for the dynamic collaborative processing of medical images in a distributed environment, combining Java, CORBA (Common Object Request Broker Architecture) and the MAS (Multi-Agent System) collaborative mechanism. The architecture allows medical specialists or applications to share records and communicate with each other on the web, overcoming the shortcomings of the traditional approach using the Common Gateway Interface (CGI) and client/server architecture, and can support collaboration among remote heterogeneous systems. The new approach improves the collaborative processing of medical data and applications and is able to enhance the interoperation among heterogeneous systems. Research on the system will help the collaboration and cooperation among medical application systems distributed on the web, thus supplying high-quality medical services such as diagnosis and therapy to practicing specialists regardless of their actual geographic location.

  18. Distributed data collection for a database of radiological image interpretations

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  19. Performance Analysis of Hybrid Distribution in Human-Centric Multimedia Networking

    HU Yuxiang; DONG Fang; LAN Julong

    2016-01-01

    With the booming of Human-centric multimedia networking (HMN), there is a rising amount of human-made multimedia that needs to be distributed to consumers with higher speed and efficiency. Hybrid distribution combining Client/Server (C/S) and Peer-to-Peer (P2P) has been successfully deployed on the Internet and its practical benefits have been widely reported, while its theoretical performance for mass data delivery unfortunately remains unknown. This paper presents an analytical and experimental study of the performance of accelerating large-scale hybrid distribution over the Internet. In particular, this paper focuses on user behavior in HMN and establishes a user behavior model based on the Kermack-McKendrick model in epidemiology. Analytical expressions for the average delay in HMN are then derived for C/S, P2P and hybrid distribution, respectively. Our simulation shows how to design and deploy a hybrid distribution system for HMN that helps to bridge the gap between system utilization and quality of service, which provides direct guidance for practical system design.
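
    For reference, the classical Kermack-McKendrick (SIR) equations on which such a user-behavior model builds are, in standard notation,

        \frac{dS}{dt} = -\beta S I, \qquad
        \frac{dI}{dt} = \beta S I - \gamma I, \qquad
        \frac{dR}{dt} = \gamma I

    where beta is the contact (infection) rate and gamma the removal rate. The paper's actual mapping of the S, I and R compartments onto user states is not given in the abstract; one natural reading in a content-distribution setting, stated here only as an assumption, is that "infected" users are those currently holding and redistributing the content.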

  20. A Java based environment to control and monitor distributed processing systems

    Distributed processing systems are considered to solve the challenging requirements of triggering and data acquisition systems for future HEP experiments. The aim of this work is to present a software environment to control and monitor large-scale parallel processing systems, based on a distributed client-server approach developed in Java. One server task may control several processing nodes, switching elements or controllers for different sub-systems. Servers are designed as multi-threaded applications for efficient communication with other objects. Servers communicate among themselves by using Remote Method Invocation (RMI) in a peer-to-peer mechanism. This distributed server layer has to provide dynamic and transparent access from any client to all the resources in the system. The graphical user interface programs, which are platform independent, may be transferred to any client via the HTTP protocol. In this scheme the control and monitoring tasks are distributed among servers, and the network controls the flow of information among servers and clients, providing a flexible mechanism for monitoring and controlling large heterogeneous distributed systems. (author)
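
    A minimal sketch of the RMI pattern described above is given below; the interface and names are illustrative assumptions, not the project's actual classes. A server exports a remote monitoring object and binds it in the RMI registry, and any client (or peer server) can look it up and invoke it.

        import java.rmi.Remote;
        import java.rmi.RemoteException;
        import java.rmi.registry.LocateRegistry;
        import java.rmi.registry.Registry;
        import java.rmi.server.UnicastRemoteObject;

        /** Illustrative RMI monitoring service; in a real project each type would be
         *  public and live in its own source file. */
        interface NodeMonitor extends Remote {
            String status(String nodeName) throws RemoteException;
        }

        class NodeMonitorImpl implements NodeMonitor {
            public String status(String nodeName) { return nodeName + ": OK"; }
        }

        public class MonitorServer {
            public static void main(String[] args) throws Exception {
                NodeMonitor stub = (NodeMonitor)
                        UnicastRemoteObject.exportObject(new NodeMonitorImpl(), 0);
                Registry registry = LocateRegistry.createRegistry(1099);
                registry.rebind("NodeMonitor", stub);   // peers and clients look this name up
                System.out.println("monitoring service bound");
            }
        }

        // A client (or another server acting as a peer) would do:
        //   Registry reg = LocateRegistry.getRegistry("server-host", 1099);
        //   NodeMonitor mon = (NodeMonitor) reg.lookup("NodeMonitor");
        //   System.out.println(mon.status("crate-07"));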

  1. Cooperating expert systems for space station power distribution management

    In a complex system such as the manned Space Station, it is deemed necessary that many expert systems must perform tasks in a concurrent and cooperative manner. An important question that arises is: which cooperative-task-performing models are appropriate for multiple expert systems jointly performing tasks? The solution to this question will provide a crucial automation design criterion for the Space Station complex systems architecture. Based on a client/server model for performing tasks, the authors have developed a system that acts as a front-end to support loosely-coupled communications between expert systems running on multiple Symbolics machines. As an example, they use two ART*-based expert systems to demonstrate the concept of parallel symbolic manipulation for power distribution management and a dynamic load planner/scheduler in the simulated Space Station environment. This on-going work will also explore other cooperative-task-performing models as alternatives and will evaluate inter- and intra-expert-system communication mechanisms. It will serve as a testbed and a bench-marking tool for other Space Station expert subsystem communication and information exchange

  2. Distributed remote temperature monitoring system for INDUS-2 vacuum chambers

    Indus-2, a 2.5 GeV Synchrotron Radiation Source (SRS) at Indore, has a large vacuum system. The vacuum envelope of the Indus-2 ring comprises 16 dipole chambers as vital parts. Each chamber has 4 photon absorbers and three beam line ports blanked with end flanges. Temperature monitoring of critical vacuum components during operation of the Indus-2 ring is an important requirement. The paper discusses a distributed, 160-channel remote temperature monitoring system developed and deployed for this purpose using microcontroller-based, modular Temperature Monitoring Units (TMU). The cabling has been extensively minimized by using an RS485 system and keeping the trip relay contacts of all units in series. For ensuring proper signal conditioning of the thermocouple outputs (K-type) and successful operation over the RS485 bus, many precautions were taken considering the close proximity to the storage ring. We also discuss the software for vacuum chamber temperature monitoring and the safety system. The software, developed using LabVIEW, has important features such as modularity, client-server architecture, local and global database logging, alarms and trips, event and error logging, provision of various important configurations, communications handling etc. (author)

  3. An Effective Distributed Model for Power System Transient Stability Analysis

    MUTHU, B. M.

    2011-08-01

    Modern power systems consist of many interconnected synchronous generators having different inertia constants, connected with a large transmission network and an ever increasing demand for power exchange. The size of the power system grows exponentially due to the increase in power demand. The data required for various power system applications have been stored in different formats in a heterogeneous environment. The power system applications themselves have been developed and deployed on different platforms and language paradigms. Interoperability between power system applications becomes a major issue because of this heterogeneous nature. The main aim of the paper is to develop a generalized distributed model for carrying out power system stability analysis. The more flexible and loosely coupled JAX-RPC model has been developed for representing transient stability analysis in large interconnected power systems. The proposed model includes Pre-Fault, During-Fault, Post-Fault and Swing Curve services which are accessible to the remote power system clients when the system is subjected to large disturbances. A generalized XML-based model for data representation has also been proposed for exchanging data in order to enhance the interoperability between legacy power system applications. The performance measure, Round Trip Time (RTT), is estimated for different power systems using the proposed JAX-RPC model and compared with the results obtained using traditional client-server and Java RMI models.

  4. Distributed information-processing system with voice control based on OS Android

    E. V. Apolonov

    2012-12-01

    Introduction: Trends in the growth of ACS and AIS and their use in everyday life are discussed. The need for a voice mode of human interaction with AIS is noted. It is observed that the network integration of AIS allows their resources to be combined and contributes to progress in speech recognition. The emergence and widespread use of smartphones create the desire to use them as personal voice terminals for access to distributed information networks. Main part: The possibility of using Android-based personal portable mobile devices (PPMD) as terminals and as autonomous units, as well as the possibility of using Windows-based stationary PCs as servers of a distributed data-processing system (DDPS) with voice control, is considered. Criteria for the selection of PPMDs and of the client terminal OS, as well as requirements for the DDPS and its structure, are formulated. The concept of building a DDPS with "client-server" and "many clients - many servers" technologies is presented. The concepts of a PPMD virtual interface and of a server virtual interface are offered. Communication between threads within the process of the PPMD virtual interface of the client terminal, and the interaction between the processes of the client and the server in autonomous mode as well as in DDPS mode, are considered. The results of experimental tests of the DDPS prototype when exchanging data between Windows and Android clients and a Windows server are given; the accuracy and reliability of the embedded solutions and the scalability of the DDPS are confirmed. Conclusions: Modern Android-based PPMDs can be used as terminal devices for the construction of different specialized voice-controlled DDPSs with "client-server" and "many clients - many servers" technologies. Unification of the APIs of PPMDs with different OSs can be achieved by implementing a virtual PPMD interface. Data exchange between DDPS processes is best implemented through Berkeley sockets, which are supported by most modern operating
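
    The "client-server" exchange over Berkeley sockets mentioned in the conclusions can be sketched as follows in plain Java (an Android terminal would use the same java.net API). The port number and the one-line request/response format are placeholders; the actual DDPS protocol is not described in the abstract.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;

        /** Toy request/response exchange over TCP sockets; port and message format are assumptions. */
        public class DdpsSocketDemo {
            // Server side: accept one connection and answer a recognition request.
            static void server() throws Exception {
                try (ServerSocket listener = new ServerSocket(5050);
                     Socket client = listener.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String request = in.readLine();        // e.g. a voice-command identifier
                    out.println("RESULT for " + request);  // placeholder response
                }
            }

            // Client side (a mobile terminal would run equivalent code).
            static void client() throws Exception {
                try (Socket socket = new Socket("localhost", 5050);
                     PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(socket.getInputStream()))) {
                    out.println("COMMAND-42");
                    System.out.println(in.readLine());
                }
            }

            public static void main(String[] args) throws Exception {
                Thread t = new Thread(() -> { try { server(); } catch (Exception ignored) {} });
                t.start();
                Thread.sleep(200);   // crude: give the server a moment to start listening
                client();
                t.join();
            }
        }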

  5. Study on the Distributed Collaborative Model and Application%分布式协作模型及应用研究

    张全海; 施鹏飞

    2003-01-01

    With the development of Web technology, the application environment has acquired many new characteristics such as dynamism, openness, distribution and information uncertainty. The processing mode of application systems is more complicated than ever. For example, it requires application systems to have more community processing ability, interactive ability, distributed processing ability and collaborative ability. Accordingly, the research and development of computer application systems has transitioned from client/server information processing systems to Web-based distributed collaborative processing systems. Especially in environments where information and resources are highly distributed, the accomplishment of complicated tasks depends more on resource coordination, information sharing and coordinator collaboration. Collaboration is one aspect of group behavior; its goal is to provide an optimal method of utilizing resources through information interaction, to solve tasks which could not be accomplished by any coordinator alone, and to obtain a greater total benefit than the sum of the individual benefits. The collaboration problem is an important one for distributed task processing. This paper surveys the research and application status of distributed collaborative models, and several representative architectures of distributed collaborative processing are proposed. Existing problems and future research directions are also presented.

  6. The design of the Alba control system: a cost-effective distributed hardware and software architecture

    Alba is a third generation synchrotron located in Spain. The control system of Alba is highly distributed. The hardware infrastructure for the control system includes on the order of 350 racks, 17000 cables and 6300 pieces of equipment. More than 150 diskless industrial computers distributed in the service area and 30 multi-core servers in the data center manage several thousand process variables. In this environment, the client-server software model, with fast and reliable communications, was imposed. Tango plays an important role. It is a big success story of the Tango Collaboration, where a complete middleware schema is available 'off the shelf'. Moreover, Tango has been effectively complemented with Sardana SCADAs (Supervision Control And Data Acquisition), a great development effort shared and used in several other institutes. The whole installation has been coordinated from the beginning with a complete cabling and equipment database, where all the equipment, cables and connectors are described and inventoried. The cabling database, or ccdb, can be considered the core of the installation since it soon turned into a central repository for the whole installation. This paper explains the design and the architecture of the control system, describes the tools and justifies the choices made. Finally, it presents and analyzes the figures regarding cost and performance. (authors)

  7. Post-processing in cardiovascular computed tomography. Performance of a client server solution versus a stand-alone solution; Bildnachverarbeitung in der kardiovaskulaeren Computertomografie. Performance von Client-Server- versus Einzelplatzloesung

    Luecke, C.; Foldyna, B.; Andres, C.; Grothoff, M.; Nitzsche, S.; Gutberlet, M.; Lehmkuhl, L. [Leipzig Univ. - Herzzentrum (Germany). Abt. fuer Diagnostische und Interventionelle Radiologie]; Boehmer-Lasthaus, S. [Siemens Healthcare Sector, Erlangen (Germany). Imaging and Therapy Div.]

    2014-12-15

    Purpose: To compare the performance of server-based (CSS) versus stand-alone post-processing software (ES) for the evaluation of cardiovascular CT examinations (cvCT) and to determine the crucial steps. Data of 40 patients (20 patients for coronary artery evaluation and 20 patients prior to transcatheter aortic valve implantation [TAVI]) were evaluated by 5 radiologists with CSS and ES. Data acquisition was performed using a dual-source 128-row CT unit (SOMATOM Definition Flash, Siemens, Erlangen, Germany) and a 64-row CT unit (Brilliance 64, Philips, Hamburg, Germany). The following workflow was evaluated: Data loading, aorta and coronary segmentation, curved multiplanar reconstruction (cMPR) and 3 D volume rendering technique (3D-VRT), measuring of coronary artery stenosis and planimetry of the aortic annulus. The time requirement and subjective quality for the workflow were evaluated. The coronary arteries as well as the TAVI data could be evaluated significantly faster with CSS (5.5 ± 2.9 min and 8.2 ± 4.0 min, respectively) than with ES (13.9 ± 5.2 min and 15.2 ± 10.9 min, respectively, p = 0.01). Segmentation of the aorta (CSS: 1.9 ± 2.0 min, ES: 3.7 ± 3.3 min), generating cMPR of coronaries (CSS: 0.5 ± 0.2 min, ES: 5.1 ± 2.6 min), aorta and iliac vessels (CSS: 0.5 ± 0.4 min and 0.4 ± 0.4 min, respectively, ES: 1.6 ± 0.7 min and 2.8 ± 3 min, respectively) could be performed significantly faster with CSS than with ES with higher quality of cMPR, measuring of coronary stenosis and 3D-VRT (p < 0.05). Evaluation of cvCT can be accomplished significantly faster and better with CSS than with ES. The segmentation remains the most time-consuming workflow step, so optimization of segmentation algorithms could improve performance even further.

  8. Application of the distributed genetic algorithm for in-core fuel optimization problems under parallel computational environment

    The distributed genetic algorithm (DGA) is applied to loading pattern optimization problems of pressurized water reactors. The basic concept of DGA follows that of the conventional genetic algorithm (GA). However, DGA equally distributes candidate solutions (i.e. loading patterns) to several independent "islands" and evolves them in each island. Communications between islands, i.e. migrations of some candidates between islands, are performed with a certain period. Since candidate solutions evolve independently in each island while accepting different genes from migrants, the premature convergence seen in the conventional GA can be prevented. Because many candidate loading patterns must be evaluated in GA or DGA, parallelization is effective in reducing turnaround time. The parallel efficiency of DGA was measured using our optimization code, and good efficiency was attained even in a heterogeneous cluster environment due to dynamic distribution of the calculation load. The optimization code is based on a client/server architecture with native TCP/IP sockets, and the client (optimization) module and the calculation server modules exchange loading-pattern objects with each other. Through a sensitivity study on the optimization parameters of DGA, a suitable set of parameters for a test problem was identified. Finally, the optimization capability of DGA and the conventional GA was compared on the test problem, and DGA provided better optimization results than the conventional GA. (author)
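
    A compact sketch of the island model described above is given below; it is not the authors' code, and the encoding, fitness function and GA operators are toy placeholders. Each island evolves its own population independently, and with a fixed period each island sends its best candidate to a neighbouring island, which replaces its worst member.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;
        import java.util.Random;

        /** Island-model GA skeleton: independent populations with periodic migration. */
        public class IslandModelSketch {
            static final int ISLANDS = 4, POP = 20, GENERATIONS = 100, MIGRATION_PERIOD = 10;
            static final Random RNG = new Random(42);

            // Placeholder candidate: a bit string standing in for a loading pattern.
            static double fitness(boolean[] pattern) {
                int ones = 0;
                for (boolean b : pattern) if (b) ones++;
                return ones;                       // toy objective, not a core calculation
            }

            public static void main(String[] args) {
                List<List<boolean[]>> islands = new ArrayList<>();
                for (int i = 0; i < ISLANDS; i++) {
                    List<boolean[]> pop = new ArrayList<>();
                    for (int j = 0; j < POP; j++) {
                        boolean[] p = new boolean[32];
                        for (int k = 0; k < p.length; k++) p[k] = RNG.nextBoolean();
                        pop.add(p);
                    }
                    islands.add(pop);
                }
                for (int gen = 1; gen <= GENERATIONS; gen++) {
                    for (List<boolean[]> pop : islands) evolveOneGeneration(pop);
                    if (gen % MIGRATION_PERIOD == 0) migrate(islands);
                }
                double best = islands.stream().flatMap(List::stream)
                        .mapToDouble(IslandModelSketch::fitness).max().orElse(0);
                System.out.println("best fitness after " + GENERATIONS + " generations: " + best);
            }

            // Placeholder evolution step: mutate a random individual, keep it if not worse.
            static void evolveOneGeneration(List<boolean[]> pop) {
                int i = RNG.nextInt(pop.size());
                boolean[] child = pop.get(i).clone();
                child[RNG.nextInt(child.length)] ^= true;
                if (fitness(child) >= fitness(pop.get(i))) pop.set(i, child);
            }

            // Ring migration: the best of each island replaces the worst of its neighbour,
            // which limits premature convergence while keeping islands independent.
            static void migrate(List<List<boolean[]>> islands) {
                Comparator<boolean[]> byFitness = Comparator.comparingDouble(IslandModelSketch::fitness);
                for (int i = 0; i < islands.size(); i++) {
                    List<boolean[]> from = islands.get(i);
                    List<boolean[]> to = islands.get((i + 1) % islands.size());
                    boolean[] best = Collections.max(from, byFitness);
                    int worst = to.indexOf(Collections.min(to, byFitness));
                    to.set(worst, best.clone());
                }
            }
        }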

  9. Application of Windows Socket Technique to Communication Process of the Train Diagram Network System Based on Client/Server Structure

    2001-01-01

    This paper focuses on the techniques for the design and realization of process communications in the computer-aided train diagram network system. The Windows Socket technique is adopted to program the client and the server, create the system applications, and solve the problems of data transfer and data sharing in the system.

  10. Post-processing in cardiovascular computed tomography. Performance of a client server solution versus a stand-alone solution

    Purpose: To compare the performance of server-based (CSS) versus stand-alone post-processing software (ES) for the evaluation of cardiovascular CT examinations (cvCT) and to determine the crucial steps. Data of 40 patients (20 patients for coronary artery evaluation and 20 patients prior to transcatheter aortic valve implantation [TAVI]) were evaluated by 5 radiologists with CSS and ES. Data acquisition was performed using a dual-source 128-row CT unit (SOMATOM Definition Flash, Siemens, Erlangen, Germany) and a 64-row CT unit (Brilliance 64, Philips, Hamburg, Germany). The following workflow was evaluated: Data loading, aorta and coronary segmentation, curved multiplanar reconstruction (cMPR) and 3 D volume rendering technique (3D-VRT), measuring of coronary artery stenosis and planimetry of the aortic annulus. The time requirement and subjective quality for the workflow were evaluated. The coronary arteries as well as the TAVI data could be evaluated significantly faster with CSS (5.5 ± 2.9 min and 8.2 ± 4.0 min, respectively) than with ES (13.9 ± 5.2 min and 15.2 ± 10.9 min, respectively, p = 0.01). Segmentation of the aorta (CSS: 1.9 ± 2.0 min, ES: 3.7 ± 3.3 min), generating cMPR of coronaries (CSS: 0.5 ± 0.2 min, ES: 5.1 ± 2.6 min), aorta and iliac vessels (CSS: 0.5 ± 0.4 min and 0.4 ± 0.4 min, respectively, ES: 1.6 ± 0.7 min and 2.8 ± 3 min, respectively) could be performed significantly faster with CSS than with ES with higher quality of cMPR, measuring of coronary stenosis and 3D-VRT (p < 0.05). Evaluation of cvCT can be accomplished significantly faster and better with CSS than with ES. The segmentation remains the most time-consuming workflow step, so optimization of segmentation algorithms could improve performance even further.

  11. Development of a Personal Digital Assistant (PDA) based client/server NICU patient data and charting system.

    Carroll, A. E.; Saluja, S.; Tarczy-Hornoch, P.

    2001-01-01

    Personal Digital Assistants (PDAs) offer clinicians the ability to enter and manage critical information at the point of care. Although PDAs have always been designed to be intuitive and easy to use, recent advances in technology have made them even more accessible. The ability to link data on a PDA (client) to a central database (server) allows for near-unlimited potential in developing point of care applications and systems for patient data management. Although many stand-alone systems exis...

  12. Intelligent self-configuring client-server analysis software for high-resolution X and gamma-ray spectrometry

    Buckley, W.M.; Carlson, J.

    1995-07-01

    The Safeguards Technology Program at the Lawrence Livermore National Laboratory is developing isotopic analysis software that is constructed to be adaptable to a wide variety of applications and requirements. The MGA++ project will develop an analysis capability based on an architecture consisting of a set of tools that can be configured by an executive to perform a specific task. The software will check the results or progress of an analysis and change assumptions and methodology as required to arrive at an optimum analysis. The software is intended to address analysis needs that arise from material control and accountability, treaty verification, complex reconfiguration and environmental clean-up applications.

  13. Design and implementation of a Client-Server System for Acquiring Beam Intensity Data from High Energy Accelerators at CERN

    Topaloudis, A; Bellas, N; Jensen, L

    The world's largest research center in the domain of High Energy Physics (HEP) is the European Organization for Nuclear Research (CERN), whose main goal is to accelerate particles through a sequence of accelerators – the accelerator complex – and bring them into collision in order to study the fundamental elements of matter and the forces acting between them. For controlling the accelerator complex, CERN needs several diagnostic tools to provide information about the beam's attributes, and one such system is the Fast Beam Current Transformer (FBCT) measuring system that provides bunch-by-bunch and total beam intensity information. The current hardware and firmware of the FBCT system have certain issues and lack diagnostics, as a lot of the calculations are done in an FPGA. In order to improve on this, the firmware was redesigned and simplified in order to increase its capabilities and provide the base of a unified FBCT measuring system that could be installed in several of CERN's accelerator complex's pa...

  14. Providing QoS for Networked Peers in Distributed Haptic Virtual Environments

    Alan Marshall

    2008-01-01

    Full Text Available Haptic information originates from a different human sense (touch); therefore the quality of service (QoS) required to support haptic traffic is significantly different from that used to support conventional real-time traffic such as voice or video. Each type of network impairment has a different (and severe) impact on the user's haptic experience. There has been no specific provision of QoS parameters for haptic interaction. Previous research into distributed haptic virtual environments (DHVEs) has concentrated on synchronization of positions (of the haptic device or virtual objects), and is based on client-server architectures. We present a new peer-to-peer DHVE architecture that further extends this to enable force interactions between two users, whereby force data are sent to the remote peer in addition to positional information. The work presented involves both simulation and practical experimentation where multimodal data is transmitted over a QoS-enabled IP network. Both forms of experiment produce consistent results which show that the use of specific QoS classes for haptic traffic will reduce network delay and jitter, leading to improvements in users' haptic experiences with these types of applications.
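
    One concrete way to place haptic traffic into a dedicated QoS class, in the spirit of what the study argues for, is to mark its packets with a DiffServ code point so that a QoS-enabled network can prioritise them. The sketch below marks a UDP socket with the Expedited Forwarding DSCP; the address, port and payload layout are hypothetical, the marking is not part of the paper's own setup, and whether it is honoured depends entirely on the network and platform.

```python
import socket
import struct

# DSCP 46 (Expedited Forwarding) shifted into the upper six bits of the
# IP TOS byte; commonly used for low-latency traffic classes.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS may be ignored or unavailable on some platforms.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Hypothetical haptic update: position (x, y, z) and force (fx, fy, fz).
packet = struct.pack("!6f", 0.10, 0.25, 0.05, 1.5, 0.0, -0.3)
sock.sendto(packet, ("192.0.2.10", 9999))   # documentation-range address
sock.close()
```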

  15. Web architecture for the remote browsing and analysis of distributed medical images and data.

    Masseroli, M; Pinciroli, F

    2001-01-01

    To provide easy retrieval, integration and evaluation of multimodal medical images and data in a web browser environment, distributed application technologies and Java programming were used to develop a client-server architecture based on software agents. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. The client side is a Java applet running in a web browser and providing a friendly medical user interface to perform queries on patient and medical test data and to integrate and properly visualize the various query results. A set of tools based on the Java Advanced Imaging API enables processing and analysis of the retrieved bioimages and quantification of their features in different regions of interest. The platform independence of Java technology makes the developed prototype easy to manage in a centralized form and to provide at each site where an intranet or Internet connection is available. By giving healthcare providers effective tools for comprehensively browsing, querying, visualizing and evaluating medical images and records in all locations where they may need them - e.g. emergency rooms, operating theaters, wards, or even outpatient clinics - the implemented prototype represents an important aid in providing more efficient diagnoses and medical treatments. PMID:11604703

  16. Distributed medical services within the ATM-based Berlin regional test bed

    Thiel, Andreas; Bernarding, Johannes; Krauss, Manfred; Schulz, Sandra; Tolxdorff, Thomas

    1996-05-01

    The ATM-based Metropolitan Area Network (MAN) of Berlin connects two university hospitals (Benjamin Franklin University Hospital and Charite) with the computer resources of the Technical University of Berlin (TUB). Distributed new medical services have been implemented and will be evaluated within the highspeed MAN of Berlin. The network with its data transmission rates of up to 155 Mbit/s renders these medical services externally available to practicing physicians. Resource and application sharing is demonstrated by the use of two software systems. The first software system is an interactive 3D reconstruction tool (3D- Medbild), based on a client-server mechanism. This structure allows the use of high- performance computers at the TUB from the low-level workstations in the hospitals. A second software system, RAMSES, utilizes a tissue database of Magnetic Resonance Images. For the remote control of the software, the developed applications use standards such as DICOM 3.0 and features of the World Wide Web. Data security concepts are being tested and integrated for the needs of the sensitive medical data. The highspeed network is the necessary prerequisite for the clinical evaluation of data in a joint teleconference. The transmission of digitized real-time sequences such as video and ultrasound and the interactive manipulation of data are made possible by Multi Media tools.

  17. Stampi: a message passing library for distributed parallel computing. User's guide, second edition

    A new message passing library, Stampi, has been developed to enable computations across different kinds of parallel computers, using MPI (Message Passing Interface) as the unique interface for communication. Stampi is based on the MPI-2 specification, and it realizes dynamic process creation on different machines and communication with the spawned processes within the scope of MPI semantics. The main features of Stampi are summarized as follows: (i) automatic switching between external and internal communications, (ii) message routing/relaying with a routing module, (iii) dynamic process creation, (iv) support of two types of connection, Master/Slave and Client/Server, (v) support of communication with Java applets. Vendors have implemented MPI libraries as closed systems within one parallel machine or their own systems, and did not support both functions: process creation on, and communication with, external machines. Stampi supports both functions and thus enables distributed parallel computing. Currently Stampi has been implemented on COMPACS (COMplex PArallel Computer System) introduced at CCSE, comprising five parallel computers and one graphic workstation, and moreover on eight kinds of parallel machines, fourteen systems in total. Stampi provides MPI communication functionality on all of them. This report mainly describes the usage of Stampi. (author)
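
    A hedged sketch of the MPI-2 dynamic process creation that Stampi builds on, written with the mpi4py bindings rather than Stampi's own API (this illustrates the MPI-2 feature, not Stampi itself; it assumes an MPI runtime and mpi4py are installed, and the task content is hypothetical):

```python
# spawn_demo.py -- run as:  mpiexec -n 1 python spawn_demo.py
# The same file acts as parent (spawner) and child (spawned worker).
import sys
from mpi4py import MPI

parent = MPI.Comm.Get_parent()

if parent == MPI.COMM_NULL:
    # Parent side: dynamically create two worker processes (MPI-2 spawn),
    # broadcast them a work item, then gather their replies.
    workers = MPI.COMM_SELF.Spawn(sys.executable, args=[__file__], maxprocs=2)
    workers.bcast({"task": "square", "value": 7}, root=MPI.ROOT)
    results = workers.gather(None, root=MPI.ROOT)
    print("results from spawned workers:", results)
    workers.Disconnect()
else:
    # Child side: receive the work item from the parent and send back a result.
    task = parent.bcast(None, root=0)
    rank = MPI.COMM_WORLD.Get_rank()
    parent.gather(task["value"] ** 2 + rank, root=0)
    parent.Disconnect()
```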

  18. An Approach For Designing Distributed Real Time Database

    Dhuha Basheer Abdullah

    2010-09-01

    Full Text Available A distributed real-time database system is a transaction processing system designed to handle workloads where transactions have service deadlines. The emphasis here is on satisfying the timing constraints of transactions (meeting these deadlines, that is, processing transactions before their deadlines expire) and on investigating distributed databases. This paper presents a proposed system named ADRTDBS. In this work, a prototype of a client/server module and a server/server module for a distributed real-time database has been designed. A server gets data from a direct user or from a group of clients connected to it, analyzes the request, broadcasts updates to all servers using 2PC (Two Phase Commit) and executes the demand using 2PL (Two Phase Locking). The proposed model is not concerned with data only, but provides synchronized replication, so an update on any server is not saved unless the update has been propagated to all servers using the 2PC and 2PL protocols. The database in this proposed system is homogeneous and depends on full replication to satisfy real-time requirements. Transactions are scheduled on the server using a proposed algorithm named EDTDF (Earliest Data or Transaction Deadline First). This algorithm executes first the transactions that have the smallest deadline, whether this deadline is specific to the data or to the transaction itself. Implementing this algorithm helps to execute a greater proportion of transactions before their deadlines. In this work two measures of performance for the proposed model were computed: first, the miss ratio (the rate of executed transactions that miss their deadline); second, the CPU utilization rate, obtained by executing a set of transactions in many sessions.
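
    The EDTDF rule described above orders transactions by the smaller of the data deadline and the transaction deadline. Below is a minimal sketch of just that priority rule using a heap; the transaction structure and deadline values are hypothetical, and it deliberately ignores 2PC/2PL, replication and every other aspect of ADRTDBS.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Transaction:
    effective_deadline: float            # min(data deadline, transaction deadline)
    name: str = field(compare=False)

def edtdf_order(requests):
    """Return transactions in Earliest Data-or-Transaction-Deadline-First order."""
    heap = [Transaction(min(d_data, d_txn), name)
            for name, d_data, d_txn in requests]
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap).name)
    return order

# (name, data deadline, transaction deadline) -- hypothetical values in ms.
requests = [("T1", 120.0, 90.0), ("T2", 40.0, 200.0), ("T3", 75.0, 60.0)]
print(edtdf_order(requests))   # -> ['T2', 'T3', 'T1']
```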

  19. Distributed open environment for data retrieval based on pattern recognition techniques

    Pattern recognition methods for data retrieval have been applied to fusion databases for the localization and extraction of similar waveforms within temporal evolution signals. In order to standardize the use of these methods, a distributed open environment has been designed. It is based on a client/server architecture that supports distribution, interoperability and portability between heterogeneous platforms. The server part is a single desktop application based on J2EE (Java 2 Enterprise Edition), which provides a mature standard framework and a modular architecture. It can handle transactions and concurrency of components that are deployed on JETTY, an embedded web container within the Java server application for providing HTTP services. The data management is based on Apache DERBY, a relational database engine also embedded in the same Java based solution. This encapsulation allows hiding of unnecessary details about the installation, distribution, and configuration of all these components, but with the flexibility to create and allocate many databases on different servers. The DERBY network module increases the scope of the installed database engine by providing traditional Java database network connections (JDBC-TCP/IP). This avoids scattering several database engines (a unique embedded engine defines the rules for accessing the distributed data). Java thin clients (Java 5 or above is the unique requirement) can be executed on the same computer as the server program (for example a desktop computer), but the server and client software can also be distributed in a remote participation environment (wide area networks). The thin client provides a graphic user interface to look for patterns (entire waveforms or specific structural forms) and display the most similar ones. This is obtained with HTTP requests and by generating dynamic content (servlets) in response to these client requests.

  20. Distributed Open Environment for Data Retrieval based on Pattern Recognition Techniques

    Full text of publication follows: Pattern recognition methods for data retrieval have been applied to fusion databases for the localization and extraction of similar waveforms within temporal evolution signals. In order to standardize the use of these methods, a distributed open environment has been designed. It is based on a client/server architecture that supports distribution, inter-operability and portability between heterogeneous platforms. The server part is a single desktop application based on J2EE, which provides a mature standard framework and a modular architecture. It can handle transactions and concurrency of components that are deployed on JETTY, an embedded web container within the Java server application for providing HTTP services. The data management is based on Apache DERBY, a relational database engine also embedded in the same Java based solution. This encapsulation allows concealment of unnecessary details about the installation, distribution, and configuration of all these components, but with the flexibility to create and allocate many databases on different servers. The DERBY network module increases the scope of the installed database engine by providing traditional Java database network connections (JDBC-TCP/IP). This avoids scattering several database engines (a unique embedded engine defines the rules for accessing the distributed data). Java thin clients (Java 5 or above is the unique requirement) can be executed on the same computer as the server program (for example a desktop computer), but the server and client software can also be distributed in a remote participation environment (wide area networks). The thin client provides a graphic user interface to look for patterns (entire waveforms or specific structural forms) and display the most similar ones. This is obtained with HTTP requests and by generating dynamic content (servlets) in response to these client requests. (authors)
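
    A rough sketch of the thin-client/server interaction described in these two records: a server holds a set of stored waveforms and answers HTTP requests with the most similar ones. Similarity here is plain Euclidean distance, and the port, path and data are hypothetical; the actual environment uses J2EE, JETTY servlets and DERBY rather than Python's standard-library HTTP tools.

```python
import json
import math
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stored waveforms (name -> samples); a real system would pull
# these from a relational database such as DERBY.
WAVEFORMS = {
    "shot_1001": [0.0, 0.5, 1.0, 0.5, 0.0],
    "shot_1002": [0.0, 0.2, 0.4, 0.2, 0.0],
    "shot_1003": [1.0, 1.0, 1.0, 1.0, 1.0],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PatternHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The thin client POSTs a query waveform; return the 2 closest shots.
        query = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        ranked = sorted(WAVEFORMS, key=lambda k: distance(WAVEFORMS[k], query))
        body = json.dumps(ranked[:2]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):      # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 8080), PatternHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Thin-client side: look for waveforms similar to a query pattern.
req = urllib.request.Request("http://127.0.0.1:8080/search",
                             data=json.dumps([0.0, 0.4, 0.9, 0.4, 0.0]).encode(),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print("most similar:", json.loads(resp.read()))
server.shutdown()
```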

  1. Interoperability between .Net framework and Python in Component way

    M. K. Pawar; Ravindra Patel; Dr. N. S. Chaudhari

    2013-01-01

    The objective of this work is to achieve interoperability of distributed objects based on CORBA middleware technology and standards. The distributed objects for the client-server technology are implemented in the C#.Net framework and the Python language. The interoperability results show the possibilities of applications in which objects can communicate in different environments and different languages. It also analyzes how to achieve client-server communication in heterogeneous environmen...

  2. Application of the distributed genetic algorithm for loading pattern optimization problems

    The distributed genetic algorithm (DGA) is applied to loading pattern optimization problems of pressurized water reactors (PWR). Due to the stiff nature of loading pattern optimization (e.g. multi-modality and non-linearity), stochastic methods like simulated annealing or the genetic algorithm (GA) are widely applied to these problems. The basic concept of DGA is based on that of GA. However, DGA equally distributes candidate solutions (i.e. loading patterns) to several independent 'islands' and evolves them in each island. Migrations of some candidates are performed among islands at a certain period. Since candidate solutions evolve independently in each island while accepting different genes from migrants from other islands, the premature convergence seen in the traditional GA can be prevented. Because many candidate loading patterns must be evaluated in one generation of GA or DGA, parallelization of these calculations works efficiently. Parallel efficiency was measured using our optimization code, and good load balance was attained even in a heterogeneous cluster environment due to dynamic distribution of the calculation load. The optimization code is based on a client/server architecture with native TCP/IP sockets, in which a client (optimization module) and calculation server modules exchange loading pattern objects with each other. Through a sensitivity study on the optimization parameters of DGA, a suitable set of parameters for a test problem was identified. Finally, the optimization capabilities of DGA and the traditional GA were compared on the test problem, and DGA provided better optimization results than the traditional GA. (author)

  3. A Permutation Gigantic Issues in Mobile Real Time Distributed Database : Consistency & Security

    Gyanendra Kr. Gupta

    2011-02-01

    Full Text Available Several shapes of Information Systems are broadly used in a variety of system models. With the rapid development of computer networks, Information System users are more concerned about data sharing in networks. In conventional relational databases, data consistency was controlled by a consistency control mechanism: when a data object is locked in a sharing mode, other transactions can only read it, but cannot update it. If the traditional consistency control method is still used, the system's concurrency will be inadequately influenced. So there are many new necessities for consistency control and security in Mobile Real Time Distributed Databases (MRTDDB). The problem is not limited only to the type of data (e.g. mobile or real-time databases). There are many aspects of data consistency problems in MRTDDB, such as inconsistency between characteristic and type of data, or the inconsistency of topological relations after objects have been modified. In this paper, many cases of consistency are discussed. As mobile computing becomes well-liked and databases grow with information sharing, security is a big issue for researchers. Both consistency and security of data are a big challenge for researchers, because whenever the data is not consistent and secure, no maneuver on the data (e.g. a transaction) is productive. It becomes more and more crucial when the transactions are used in non-traditional environments like Mobile, Distributed, Real Time and Multimedia databases. In this paper we raise the different aspects and analyze the available solutions for consistency and security of databases. Traditional Database Security has focused primarily on creating user accounts and managing user rights to database objects. But the use of these databases in mobile and nomadic computing creates new prospects for research. The wide spread use of databases over the web, heterogeneous client-server architectures, application servers, and networks creates a critical need to

  4. Consistency and Security in Mobile Real Time Distributed Database (MRTDDB): A Combinational Giant Challenge

    Gupta, Gyanendra Kr.; Sharma, A. K.; Swaroop, Vishnu

    2010-11-01

    Many types of Information Systems are widely used in various fields. With the hasty development of computer networks, Information System users care more about data sharing in networks. In traditional relational databases, data consistency was controlled by a consistency control mechanism: when a data object is locked in a sharing mode, other transactions can only read it, but cannot update it. If the traditional consistency control method is still used, the system's concurrency will be inadequately influenced. So there are many new necessities for consistency control and security in MRTDDB. The problem is not limited only to the type of data (e.g. mobile or real-time databases). There are many aspects of data consistency problems in MRTDDB, such as inconsistency between attribute and type of data, or the inconsistency of topological relations after objects have been modified. In this paper, many cases of consistency are discussed. As mobile computing becomes well liked and databases grow with information sharing, security is a big issue for researchers. Consistency and security of data are a big challenge for researchers because whenever the data is not consistent and secure, no maneuver on the data (e.g. a transaction) is productive. It becomes more and more crucial when the transactions are used in non-traditional environments like Mobile, Distributed, Real Time and Multimedia databases. In this paper we raise the different aspects and analyze the available solutions for consistency and security of databases. Traditional Database Security has focused primarily on creating user accounts and managing user privileges to database objects. But the use of these databases in mobile and nomadic computing creates new opportunities for research. The wide spread use of databases over the web, heterogeneous client-server architectures, application servers, and networks creates a critical need to amplify this focus. In this paper we also discuss an overview of the new and old

  5. Scalable Scientific Data Mining in Distributed, Peer-to-Peer Environments

    Borne, K. D.; Kargupta, H.; Das, K.; Griffin, W.; Giannella, C.

    2008-12-01

    reduced to a hyperplane in lower dimensions. Since the attributes which define the fundamental plane span two data repositories (SDSS and 2MASS) instead of one, we focus on cross-matching them through the NVO, and we then apply distributed data mining algorithms to analyze these data distributed over a large number of compute nodes. Distributed data mining techniques will not require scientists to download massive chunks of data for scientific discovery and will thus enable them to use distributed database queries across distributed virtual tables of de-centralized, joined and integrated sky survey catalogs. This will make the existing client-server-based astronomy data services richer by providing the power of distributed and P2P data mining technology.

  6. Development of a Client/Server Model Information System with ODBC

    李春旺; 孙劲松

    1998-01-01

    After analyzing the changing requirements of library application systems, this paper introduces the structure and principles of ODBC in the context of the Client/Server model, discusses how to introduce ODBC technology into a Client/Server architecture, and proposes a development scheme for an interoperable library application system.

  7. Application of the Thin-Client/Server Computing Model in the Community Library

    赵秀丽; 杨静; 马爱华; 秦梅素

    2003-01-01

    This paper mainly describes the application of the Thin-Client/Server computing model in building electronic reading rooms for community libraries, explains the concept, working mode and technical characteristics of the Thin-Client/Server computing model, and discusses its application prospects in the future development of community libraries.

  8. A Coordination Model and Development Tool for Client/Server Computing

    黄海宁; 孙伟伟; 夏宽理; 赵文耘; 钱乐秋

    1998-01-01

    This paper proposes a coordination model for Client/Server computing that allows a Client to activate multiple sub-services and supports cooperative interaction among those sub-services; on this basis, an agent-based Client/Server development tool is designed.

  9. A Coordination Model and Development Tools for Client/Server Computing

    黄海宁; 孙伟伟; 夏宽理

    2000-01-01

    In the basic Client/Server model, the Client requests a single, independent service from the Server. To handle cases where the Client requests a complex service, this paper proposes a coordination model for Client/Server computing that allows the Client to activate multiple sub-services and supports cooperative interaction among those sub-services; on this basis, an agent-based Client/Server development tool is designed.

  10. Application of the Thin-Client/Server Architecture in the Library

    陈春芳

    2005-01-01

    Drawing on practical applications of the Thin-Client/Server architecture in library information systems, this paper analyzes the technical requirements of library automation and the advantages and disadvantages of the Thin-Client/Server architecture. Managing terminal server operation, client devices and network connections with these strengths and weaknesses in mind helps to further improve the usefulness of the Thin-Client/Server architecture in libraries.

  11. Performance Modeling in Client Server Network Comparison of Hub, Switch and Bluetooth Technology Using Markov Algorithm and Queuing Petri Nets with the Security Of Steganography

    V.B.Kirubanand

    2010-03-01

    Full Text Available The main theme of this paper is to find the performance of Hub, Switch and Bluetooth technology using the Queueing Petri-net model and the Markov algorithm, with the security of Steganography. This paper mainly focuses on comparison of Hub, Switch and Bluetooth technologies in terms of service rate and arrival rate by using the Markov algorithm (M/M(1,b)/1). When comparing the service rates of the Hub network, the Switch network and the Bluetooth technology, it has been found that the service rate of the Bluetooth technology is very efficient for implementation. The values obtained from the Bluetooth technology can be used for calculating the performance of other wireless technologies. QPNs facilitate the integration of both hardware and software aspects of the system behavior in the improved model. The purpose of Steganography is to send hidden information from one system to another through the Bluetooth technology with security measures. Queueing Petri Nets are very powerful as a performance analysis and prediction tool. By demonstrating the power of QPNs as a modeling paradigm for forthcoming technologies, we hope to motivate further research in this area.
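
    For orientation, the sketch below computes the textbook M/M/1 performance measures (utilisation, mean number in system, mean time in system) from an arrival rate and a service rate. Note that this is the plain M/M/1 model, not the M/M(1,b)/1 bulk-service model analysed in the paper, and the rates used are hypothetical.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Classic M/M/1 results: utilisation rho, mean number in system L,
    mean time in system W (Little's law: L = arrival_rate * W)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be < service rate")
    rho = arrival_rate / service_rate
    L = rho / (1 - rho)                      # mean number of requests in the system
    W = 1 / (service_rate - arrival_rate)    # mean time a request spends in the system
    return rho, L, W

# Hypothetical rates (requests per second) for two network configurations.
for name, lam, mu in [("hub", 40, 50), ("bluetooth", 40, 80)]:
    rho, L, W = mm1_metrics(lam, mu)
    print(f"{name}: utilisation={rho:.2f}  L={L:.2f}  W={W*1000:.1f} ms")
```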

  12. Upgrading a TCABR Data Analysis and Acquisition System for Remote Participation Using Java, XML, RCP and Modern Client/Server Communication/Authentication

    Each plasma physics laboratory has a proprietary scheme for its control and data acquisition system, usually different from one laboratory to another. This means that each laboratory has its own way of controlling the experiment and retrieving data from the database. Fusion research relies to a great extent on international collaboration, and it is difficult to follow the work remotely with private systems. The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are very complex, with a large variability of components, requirement and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, systems are developed using the eXtensible Markup Language (XML) technology. The communication between clients and servers uses Remote Procedure Call (RPC) based on XML (RPC-XML technology). The integration of the Java language, XML and RPC-XML technologies allows easy development of a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application provides simple Graphical User Interface (GUI) access. The TCABR tokamak team, collaborating with the CFN (Nuclear Fusion Center, Technical University of Lisbon), is implementing these remote participation technologies, which are going to be tested at the Joint Experiment on TCABR (TCABR-JE), a Host Laboratory Experiment, organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on 'Joint Research Using Small Tokamaks', during the period 4 to 15 May 2009. This document is made of a poster and its abstract. (author)

  13. Upgrading a TCABR data analysis and acquisition system for remote participation using Java, XML, RCP and modern client/server communication/authentication

    The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components, requirements and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, systems are developed using the eXtensible Markup Language (XML) technology. The communication between clients and servers uses remote procedure call (RPC) based on XML (RPC-XML technology). The integration of the Java language, XML and RPC-XML technologies allows easy development of a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application allows simple graphical user interface (GUI) access. The TCABR tokamak team, in collaboration with the IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment, organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on 'Joint Research Using Small Tokamaks'.
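
    The RPC-over-XML communication layer described in these two records can be sketched with Python's standard xmlrpc modules. This is only an illustration of the XML-RPC idea, not the TCABR system's Java/RPC-XML implementation; the method name, shot number and port are hypothetical.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: expose a (hypothetical) data-retrieval method over XML-RPC.
def get_signal(shot_number, signal_name):
    """Return a fake waveform for the requested shot and signal."""
    return {"shot": shot_number, "signal": signal_name,
            "samples": [0.0, 0.1, 0.4, 0.9, 1.0]}

server = SimpleXMLRPCServer(("127.0.0.1", 8001), allow_none=True, logRequests=False)
server.register_function(get_signal, "get_signal")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the same call syntax would work from any remote laboratory.
proxy = ServerProxy("http://127.0.0.1:8001", allow_none=True)
data = proxy.get_signal(25467, "mirnov_01")
print(data["shot"], len(data["samples"]), "samples")
server.shutdown()
```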

  14. Implementation of a Client/Server Communication Model Using the Modbus Protocol

    吴爱国; 刘屏凡

    2005-01-01

    Modbus/TCP is the Modbus protocol family's solution for industrial Ethernet. Because of its simplicity and its compatibility with other existing Modbus solutions, it is widely adopted. Based on the requirements of the Modbus/TCP protocol, this article describes how to use sockets and a multithreading mechanism to implement a client/server communication model for the Modbus protocol.

  15. Client-Server (TCP/IP) Network Programming with VB

    陶运道

    2002-01-01

    The VB WinSock control can establish connections with remote computers and exchange data over the User Datagram Protocol (UDP) or the Transmission Control Protocol (TCP). TCP/IP is the most important protocol suite of the Internet. VB provides the WinSock control for network communication on top of TCP/IP. This article describes how to implement TCP/IP network programming with VB.

  16. Seed-a distributed data base architecture for global management of steam-generator inspection data

    This paper deals with a data management system - called SEED (Steam-generator Eddy-current Expert Database) - for global handling of SG (steam generator) tube inspection data in nuclear power plants. SEED integrates all stages of the SG tube inspection process and supports all data such as raw eddy current data, inspection history data, SG tube information, etc. SEED is implemented under a client/server computing architecture, supporting LAN/WAN based graphical user interface facilities using WWW programming tools. (author)

  17. Integration of the Ovation Based Distributed Turbine Control System into the Existing Process Information System and Full Scope Simulator at NPP Krsko

    The Programmable Digital Electro Hydraulic System (PDEH) is a Turbine Control System (TCS), built on the Emerson OVATION Distributed Control System (DCS) platform and installed by the Westinghouse Electric Company at the Krsko Nuclear Power Plant as the replacement for the DEH Mod II turbine control system. The core of the PDEH system consists of three pairs of redundant controllers (ETS, OA/OPC and ATC/MSR) configured for the Turbine Generator (TG) set protection, control and monitoring functions. The existing serial data link between the replaced DEH Mod II and the Process Information System (PIS) was removed and replaced with a redundant bi-directional Ethernet TCP/IP data link via two Data Link servers in a client-server architecture configuration. All hardwired signals and some of the important calculated signals are transferred from PDEH to PIS. The main purpose of the PIS data link is trending at the existing PIS workstations with pre-configured trend groups and centralized data archiving. Most of the PDEH display screens (mimics) were also replicated on the PIS platform, so that TG set monitoring and operation overview can be performed over the PIS network as well as over the Process Computer Network (PCN) with the PMSNT-view utility. The simulator is implemented using a stimulated Windows-based Ovation platform and an SGI IRIX-based Plant Model Computer (PMC) using the L-3 MAPPS simulation software platform. Two PDEH stimulated systems are installed at the Krsko Full Scope Simulator (KFSS), one for foreground and another for background simulation. The stimulated PDEH hardware is essentially identical to that installed in the plant, with the exception of hardware redundancy, isolation features and the interface with physical plant I/O. The Ovation control logic sheets are executed with virtual controllers hosted on a simulator-specific Virtual Controller Host (VCH) Workstation. The data interface between the simulator Ovation system and the PMC is accomplished through the Ethernet

  18. Abstracting object interactions using composition filters

    Aksit, Mehmet; Wakita, Ken; Bosch, Jan; Bergmans, Lodewijk; Yonezawa, Akinori

    1994-01-01

    It is generally claimed that object-based models are very suitable for building distributed system architectures since object interactions follow the client-server model. To cope with the complexity of today's distributed systems, however, we think that high-level linguistic mechanisms are needed to

  19. Improving Peer-to-Peer Video Systems

    Petrocco, R.P.

    2016-01-01

    Video Streaming is nowadays the Internet's biggest source of consumer traffic. Traditional content providers rely on a centralised client-server model for distributing their video streaming content. The current generation is moving from being passive viewers, or content consumers, to active content pr

  20. The ABC Adaptive Fusion Architecture

    Bunde-Pedersen, Jonathan; Mogensen, Martin; Bardram, Jakob Eyvind

    2006-01-01

    Contemporary distributed collaborative systems tend to utilize either a client-server or a pure peer-to-peer paradigm. A client-server solution may potentially spawn direct connections between the clients to offload the server, thereby creating a hybrid architecture. A pure peer-to-peer paradigm may on...

  1. Facilitating Designer-Customer Communication in the World Wide Web.

    Tuikka, Tuomo; Salmela, Marko

    1998-01-01

    Presents WebShaman, an application built to demonstrate how a distributed virtual prototyping system with a client/server architecture could support geographically distant designer/customer communication. Provides an overview of smart virtual prototyping; discusses implementation of synchronous collaboration via the World Wide Web; and assesses…

  2. Web-Based Course Management and Web Services

    Mandal, Chittaranjan; Sinha, Vijay Luxmi; Reade, Christopher M. P.

    2004-01-01

    The architecture of a web-based course management tool that has been developed at IIT [Indian Institute of Technology], Kharagpur and which manages the submission of assignments is discussed. Both the distributed architecture used for data storage and the client-server architecture supporting the web interface are described. Further developments…

  3. Welcome to the World-Wide Web.

    Davis, Philip

    1995-01-01

    World Wide Web (WWW) is a multimedia, globally distributed, client/server information system based on hypertext. WWW browser software (Mosaic, Cello, and Samba) allows users to navigate hypertext documents via the Internet. Libraries are taking advantage of the fact that hypertext linked documents can be easily and inexpensively shared. (JMV)

  4. A Scalable Data-Distributed Algorithm for Volume Rendering on Parallel Virtual Machines

    邓俊辉; 唐泽圣

    2001-01-01

    An algorithm is presented for volume rendering in environments of parallel virtual machines. To reduce the communication cost, as well as to guarantee the locality of all subtasks, the volume data is divided into and organized as a series of slices. The task subdivision algorithm produces acceptable load balancing by maintaining and employing a database of performance indices. An asynchronous binary method merges all partial images in O(log n) time. An efficient development platform based on TCP/IP and Socket standards can help parallelize various rendering algorithms on virtual machines. Our algorithm was implemented on this platform using the classical client/server paradigm. The scalabilities of both task size and host number were tested experimentally.
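
    The asynchronous binary merge mentioned above combines n partial images in O(log n) rounds by pairing nodes at each round. The sketch below illustrates only the pairing scheme on toy "images" (lists of premultiplied (color, alpha) pixels blended front to back); it is not the authors' parallel implementation, and the pixel data is hypothetical.

```python
def composite(front, back):
    """Premultiplied-alpha 'over' operator on two partial images
    given as lists of (color, alpha) pixels."""
    return [(cf + cb * (1 - af), af + ab * (1 - af))
            for (cf, af), (cb, ab) in zip(front, back)]

def binary_merge(partials):
    """Merge n partial images in O(log n) pairwise rounds."""
    while len(partials) > 1:
        merged = []
        for i in range(0, len(partials) - 1, 2):     # pair neighbours
            merged.append(composite(partials[i], partials[i + 1]))
        if len(partials) % 2:                         # odd image passes through
            merged.append(partials[-1])
        partials = merged
    return partials[0]

# Four hypothetical partial images of 3 pixels each, ordered front to back.
partials = [[(0.8, 0.5), (0.0, 0.0), (0.2, 0.3)],
            [(0.1, 0.4), (0.6, 0.6), (0.0, 0.0)],
            [(0.0, 0.0), (0.3, 0.2), (0.9, 0.7)],
            [(0.5, 1.0), (0.5, 1.0), (0.5, 1.0)]]
print(binary_merge(partials))
```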

  5. HTML5 WebSocket protocol and its application to distributed computing

    Muller, Gabriel L.

    2014-01-01

    The HTML5 WebSocket protocol brings real-time communication in web browsers to a new level. Daily, new products are designed to stay permanently connected to the web. WebSocket is the technology enabling this revolution. WebSockets are supported by all current browsers, but it is still a new technology in constant evolution. WebSockets are slowly replacing older client-server communication technologies. As opposed to comet-like technologies, WebSockets' remarkable performance is a result of the p...

  6. Development of a web-based distributed interactive simulation (DIS) environment using javascript

    Hsiao, Chen-Fu

    2014-01-01

    This thesis investigated the current infrastructure for web-based simulations using the DIS network protocol. The main technologies studied were WebSockets, WebRTC and WebGL. This thesis sought readily available means to establish networks for interchanging DIS messages (PDUs), so the WebSocket gateway server from the Open-DIS project was used to construct a client-server structure and the PeerJS API was used to construct a peer-to-peer structure. WebGL was used to create a scene and render 3D models ...

  7. A Distributed Network Management Architecture Based on CORBA and Mobile Agent

    吴刚; 王怀民; 吴泉源

    2001-01-01

    This paper analyzes the architectural shortcomings of current mainstream network management systems, describes the application methods and advantages of distributed object technology (CORBA) and mobile agent technology in the network management field, and, combining these advantages, presents an integrated distributed network management framework based on CORBA and mobile agents whose feasibility is confirmed through experimental simulation. The increasing scale and complexity of the network are making network management more and more important. The widely used network management systems based on SNMP or CMIP adopt a Client/Server paradigm and are characterized by centralization. Due to the simplicity of the manager/agent model, these traditional systems are widely used, but many drawbacks come with that simplicity and centralization. Analyzing the limitations of traditional network management systems, the paper describes the benefits of using CORBA and Mobile Agent technology in this field. The Common Object Request Broker Architecture (CORBA) comes from the OMG for distributed object processing and integration. With the distributed object platform ORB, the interface definition language IDL, and the abundant common services of CORBA, an open network management system can be constructed easily. The architecture based on CORBA presented in this paper mainly addresses the integration, extensibility, reusability, and scalability of distributed network management. It benefits from the large number of SNMP/CMIP devices and the mature management platforms. At the same time, it provides not only an extensible application framework to process all kinds of changes quickly through CORBA's distributed object model, but also programming-language independence through IDL. It also does well in information sharing and interoperation between high-level services. A mobile agent is an active computing entity characterized by autonomous, interactive and mobile properties. Due to its autonomous migration on the heterogeneous network, the mobile agent can agilely decentralize the management

  8. A Distributed Feature-based Environment for Collaborative Design

    Wei-Dong Li

    2003-02-01

    Full Text Available This paper presents a client/server design environment based on 3D feature-based modelling and Java technologies to enable design information to be shared efficiently among members within a design team. In this environment, design tasks and clients are organised through working sessions generated and maintained by a collaborative server. The information from an individual design client during a design process is updated and broadcast to other clients in the same session through an event-driven and call-back mechanism. The downstream manufacturing analysis modules can be wrapped as agents and plugged into the open environment to support the design activities. At the server side, a feature-feature relationship is established and maintained to filter the varied information of a working part, so as to facilitate efficient information update during the design process.
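
    A minimal sketch of the event-driven, call-back broadcast described above: a session object keeps the registered clients and pushes each design update to every other member. The class and method names are hypothetical; the real environment is a Java client/server system with feature-based modelling, not this toy.

```python
class Session:
    """Working session that relays design events to all other clients."""
    def __init__(self):
        self.clients = []

    def join(self, client):
        self.clients.append(client)

    def publish(self, sender, event):
        # Broadcast the update to every client except the one that produced it.
        for client in self.clients:
            if client is not sender:
                client.on_update(event)          # registered call-back

class DesignClient:
    def __init__(self, name, session):
        self.name, self.session = name, session
        session.join(self)

    def modify_feature(self, feature, change):
        self.session.publish(self, {"feature": feature, "change": change})

    def on_update(self, event):                  # call-back invoked by the session
        print(f"{self.name} received update: {event}")

session = Session()
alice, bob = DesignClient("alice", session), DesignClient("bob", session)
alice.modify_feature("hole_3", {"diameter": 12.5})   # bob gets notified
```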

  9. Architecture, Application Software Design and Access Mechanisms of Client/Server Databases

    黄丽霞

    2000-01-01

    This paper comprehensively discusses the evolution and main characteristics of client/server database system architectures, outlines the design tasks for the new generation of client-side and server-side database application software, and further explores various techniques and approaches for accessing databases.

  10. Research and Application of Data Integrity Constraint Techniques Based on Client/Server

    鲁广英

    2010-01-01

    This paper discusses data integrity constraints in Client/Server architectures, where an integrity constraint mechanism must be established, and examines what data integrity constraints are and how to implement them. Drawing on years of experience developing information management systems based on the Client/Server architecture, and using SQL Server and VB as the platform, it describes methods for enforcing data integrity constraints in a management information system.

  11. NEWCOM: An Architecture Description Language in Client/Server Style

    陆汝钤; 金芝; 刘璘; 蒋爱军; 赖辉旻

    1998-01-01

    This paper introduces NEWCOM, an architecture description language in the client/server style. Its level of abstraction lies between requirements descriptions and fourth-generation languages. A NEWCOM program describes, at a macroscopic level, an MIS running in a client/server environment. It supports multiple kinds of components and provides several connection mechanisms for them. NEWCOM involves data warehouse and database transformation techniques, supports heterogeneous network structures and database platforms, and helps to realize enterprise-wide solutions. NEWCOM is general and open: it can serve as a design blueprint for software engineers and can also generate MIS systems for users directly.

  12. Secure Inter-Process Communication

    Valentin Radulescu

    2013-01-01

    This article describes the need in modern distributed systems to authenticate a process running in the system and to provide a secure channel for inter-process communication in which both the client authenticates to the server and the server authenticates to the client. The distributed system is a client-server system based on the ENEA LINX inter-process communication framework. Enea LINX is a Linux open source project which allows processes to exchange information between seve...

  13. Abstracting object interactions using composition filters

    Aksit, Mehmet; Wakita, Ken; Bosch, Jan; Bergmans, Lodewijk; YONEZAWA, AKINORI

    1994-01-01

    It is generally claimed that object-based models are very suitable for building distributed system architectures since object interactions follow the client-server model. To cope with the complexity of today's distributed systems, however, we think that high-level linguistic mechanisms are needed to effectively structure, abstract and reuse object interactions. For example, the conventional object-oriented model does not provide high-level language mechanisms to model layered system architect...

  14. Interoperability between .Net framework and Python in Component way

    M. K. Pawar

    2013-01-01

    Full Text Available The objective of this work is to achieve interoperability of distributed objects based on CORBA middleware technology and standards. The distributed objects for the client-server technology are implemented in the C#.Net framework and the Python language. The interoperability results show the possibilities of applications in which objects can communicate in different environments and different languages. It also analyzes how to achieve client-server communication in a heterogeneous environment using the OmniORBpy IDL compiler and the IIOP.NET IDLtoCLS mapping. The results obtained demonstrate the interoperability between the .Net framework and the Python language. This paper also summarizes a set of fairly simple examples using some reasonably complex software tools.

  15. Performance Measurement of Cloud Computing Services

    Suakanto, Sinung; Suhono H. Supangkat; Suhardi 1); Saragih, Roberd

    2012-01-01

    Cloud computing has been growing in recent years as a new technology and a new business model. From a distributed-technology perspective, cloud computing is most like client-server services such as web-based or web-service applications, but it uses virtual resources for execution. Currently, cloud computing relies on the use of elastic virtual machines and on the network for data exchange. We conduct an experimental setup to measure the quality of service received by cloud computing customers. The experimental setup was done by...

  16. U.S. Marine specific software interoperability requirements of the AFATDS and IOS software suites

    Thome, Geoffrey D.

    2002-01-01

    Approved for public release; distribution is unlimited. The Marine Corps has several Tactical Combat Systems at the Infantry Division level and below. The Information-Operations Server Version 1 (IOS v. 1) is a command and control (C2) system with a client-server architecture that when networked offers the Common Operational Picture (COP). The client is called Command and Control Personal Computer (C2PC). IOS was designed primarily to support maneuver, and has its roots in the Navy's Joint...

  17. A Full Scope Nuclear Power Plant Training Simulator: Design and Implementation Experiences

    Pedro A. Corcuera

    2003-01-01

    This paper describes the development of a full scope training simulator for a Spanish nuclear power plant. The simulator is based on a client/server architecture that allows the distributed execution in a network with many users to participate in the same simulation. The interface was designed to support the interaction of the operators with the simulator through virtual panels supported by touch screens with high fidelity graphic displays. The simulation environment is described including th...

  18. The X-Files: Investigating Alien Performance in a Thin-client World

    Gunther, Neil J.

    2000-01-01

    Many scientific applications use the X11 window environment; an open source windows GUI standard employing a client/server architecture. X11 promotes: distributed computing, thin-client functionality, cheap desktop displays, compatibility with heterogeneous servers, remote services and administration, and greater maturity than newer web technologies. This paper details the author's investigations into close encounters with alien performance in X11-based seismic applications running on a 200-n...

  19. Security Based Service Oriented Architecture in Cloud Environment

    Asha N. Chaudhary,; Prof. Hitesh A. Bheda

    2014-01-01

    Service Oriented Architecture is an appropriate model for distributed application development amid the recent explosion of Internet services and cloud computing. SOA introduces new security challenges that are not present in single-hop client-server architectures, due to the involvement of multiple service providers in a service request. The interaction of independent services in SOA could break service policies. A user in an SOA system has no control over what happens in the chain of ser...

  20. Techniques for multiple database integration

    Whitaker, Barron D

    1997-01-01

    Approved for public release; distribution is unlimited There are several graphic client/server application development tools which can be used to easily develop powerful relational database applications. However, they do not provide a direct means of performing queries which require relational joins across multiple database boundaries. This thesis studies ways to access multiple databases. Specifically, it examines how a 'cross-database join' can be performed. A case study of techniques us...

  1. Using object-oriented algebraic nets for the reverse engineering of Java programs: a case study

    di Marzo Serugendo, Giovanna; Guelfi, Nicolas

    1998-01-01

    The problem addressed in this paper is the following: how to use high-level Petri nets for the reverse engineering of implemented distributed applications. The paper presents a reverse engineering methodology applied on a real (simple) Java applet based client/server application. First, starting from the Java program, several abstraction steps are described using the CO-OPN/2 formal specification language. Then, the paper presents brand new research that studies property preservations during ...

  2. Security for Decentralised Service Location - Exemplified with Real-Time Communication Session Establishment

    Seedorf, Jan

    2013-01-01

    Decentralised Service Location, i.e. finding an application communication endpoint based on a Distributed Hash Table (DHT), is a fairly new concept. The precise security implications of this approach have not been studied in detail. More importantly, a detailed analysis regarding the applicability of existing security solutions to this concept has not been conducted. In many cases existing client-server approaches to security may not be feasible. In addition, to understand the necessity for s...

  3. A task-oriented modular and agent-based collaborative design mechanism for distributed product development

    Liu, Jinfei; Chen, Ming; Wang, Lei; Wu, Qidi

    2014-05-01

    The rapid expansion of enterprises makes product collaborative design (PCD) a critical issue in a distributed heterogeneous environment, but as the collaborative tasks of large-scale networks become more complicated, neither a unified task decomposition and allocation methodology nor an Agent-based network management platform can satisfy the increasing demands. In this paper, to meet the requirements of PCD for distributed product development, a collaborative design mechanism based on the idea of modularity and the Agent technology is presented. First, a top-down 4-tier process model based on task-oriented modules and Agents is constructed for PCD, after analyzing the mapping relationships between requirements and functions in collaborative design. Second, on the basis of sub-task decomposition for PCD using a mixed method, a mathematical model of task-oriented modularization based on multi-objective optimization is established to maximize the module cohesion degree and minimize the module coupling degree, while considering the module executable degree as a restriction. The mathematical model is optimized and simulated with a modified PSO, and the decomposed modules are obtained. Finally, the Agent structure model for collaborative design is put forward, and the optimum matching Agents are selected by using a similarity algorithm to implement the different task modules through the integrated reasoning and decision-making mechanism with the behavioral model of collaborative design Agents. With the results of experimental studies on automobile collaborative design, the feasibility and efficiency of this methodology of task-oriented modularization and Agent-based collaborative design in a distributed heterogeneous environment are verified. On this basis, an integrative automobile collaborative R&D platform is developed. This research provides an effective platform for automobile manufacturing enterprises to achieve PCD, and helps to promote product numeralization collaborative R&D and

  4. Manufacturing Communication: DCOM-MMS-based Approach for Flexible Manufacturing System

    2005-01-01

    A design approach to manufacturing communication for flexible manufacturing systems is presented in this paper. The primary objective is to provide the flexible manufacturing control system with interoperability and reconfigurability. Based on a description of the manufacturing message specification (MMS) and the distributed component object model (DCOM), a client/server manufacturing communication model is built with the MMS standard and DCOM middleware, and the communication interfaces between the MMS client and the MMS server are designed with the Microsoft interface definition language (MIDL) and the abstract syntax notation one (ASN.1) of MMS services. As a result, DCOM and MMS integration leads to client/server communication capabilities independent of the different operating systems and manufacturing devices in a flexible manufacturing automation environment. Finally, to verify the new design approach, a prototype robot control system has been implemented in the MS 2000 Server/Professional operating system and VC++ 6.0 developer environments.

  5. Using ssh as portal – The CMS CRAB over glideinWMS experience

    The User Analysis of the CMS experiment is performed in a distributed way using both Grid and dedicated resources. In order to insulate the users from the details of the computing fabric, CMS relies on the CRAB (CMS Remote Analysis Builder) package as an abstraction layer. CMS has recently switched from a client-server version of CRAB to a purely client-based solution, with ssh being used to interface with the HTCondor-based glideinWMS batch system. This switch has resulted in a significant improvement in user satisfaction, as well as in a significant simplification of the CRAB code base and of the operation support. This paper presents the architecture of the ssh-based CRAB package and the rationale behind it, as well as the operational experience of running both the client-server and the ssh-based versions in parallel for several months.
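
    The "ssh as portal" pattern, where a local client talks to a remote HTCondor-based batch system over a plain ssh channel, can be sketched as below. This is not the CRAB code; the host name and job description file are hypothetical, the HTCondor commands are used generically, and password-less ssh keys are assumed to be set up.

```python
import subprocess

REMOTE = "submit.example.org"        # hypothetical submission host

def remote(cmd):
    """Run a command on the remote submission host over ssh and return stdout."""
    result = subprocess.run(["ssh", REMOTE, cmd],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Submit a (hypothetical) job description and then poll the queue,
# exactly as a local client would do through an ssh 'portal'.
print(remote("condor_submit analysis_task.jdl"))
print(remote("condor_q -submitter $USER"))
```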

  6. Distributed Knight

    Hansen, Klaus Marius; Damm, Christian Heide

    2005-01-01

    An extension of Knight (2005) that supports distributed synchronous collaboration, implemented using type-based publish/subscribe.

  7. Distribution center

    2004-01-01

    A distribution center is a logistics link whose main function is fulfilling physical distribution. Generally speaking, it is a large and highly automated center destined to receive goods from various plants and suppliers, take orders, fill them efficiently, and deliver goods to customers as quickly as possible.

  8. Distributive luck

    Knight, C

    2012-01-01

    This article explores the Rawlsian goal of ensuring that distributions are not influenced by the morally arbitrary. It does so by bringing discussions of distributive justice into contact with the debate over moral luck initiated by Williams and Nagel. Rawls’ own justice as fairness appears to be incompatible with the arbitrariness commitment, as it creates some equalities arbitrarily. A major rival, Dworkin’s version of brute luck egalitarianism, aims to be continuous wi...

  9. Fuel distribution

    Tison, R.R.; Baker, N.R.; Blazek, C.F.

    1979-07-01

    Distribution of fuel is considered from a supply point to the secondary conversion sites and ultimate end users. All distribution is intracity with the maximum distance between the supply point and end-use site generally considered to be 15 mi. The fuels discussed are: coal or coal-like solids, methanol, No. 2 fuel oil, No. 6 fuel oil, high-Btu gas, medium-Btu gas, and low-Btu gas. Although the fuel state, i.e., gas, liquid, etc., can have a major impact on the distribution system, the source of these fuels (e.g., naturally-occurring or coal-derived) does not. Single-source, single-termination point and single-source, multi-termination point systems for liquid, gaseous, and solid fuel distribution are considered. Transport modes and the fuels associated with each mode are: by truck - coal, methanol, No. 2 fuel oil, and No. 6 fuel oil; and by pipeline - coal, methane, No. 2 fuel oil, No. 6 oil, high-Btu gas, medium-Btu gas, and low-Btu gas. Data provided for each distribution system include component makeup and initial costs.

  10. Distributed creativity

    Glaveanu, Vlad Petre

    This book challenges the standard view that creativity comes only from within an individual by arguing that creativity also exists ‘outside’ of the mind or more precisely, that the human mind extends through the means of action into the world. The notion of ‘distributed creativity’ is not commonly used within the literature and yet it has the potential to revolutionise the way we think about creativity, from how we define and measure it to what we can practically do to foster and develop creativity. Drawing on cultural psychology, ecological psychology and advances in cognitive science, this book offers a basic framework for the study of distributed creativity that considers three main dimensions of creative work: sociality, materiality and temporality. Starting from the premise that creativity is distributed between people, between people and objects and across time, the book reviews

  11. Spatial distribution

    Borregaard, Michael Krabbe; Hendrichsen, Ditte Katrine; Nachman, Gøsta Støger

    2008-01-01

    , depending on the nature of intraspecific interactions between them: while the individuals of some species repel each other and partition the available area, others form groups of varying size, determined by the fitness of each group member. The spatial distribution pattern of individuals again strongly...

  12. Quasihomogeneous distributions

    von Grudzinski, O

    1991-01-01

    This is a systematic exposition of the basics of the theory of quasihomogeneous (in particular, homogeneous) functions and distributions (generalized functions). A major theme is the method of taking quasihomogeneous averages. It serves as the central tool for the study of the solvability of quasihomogeneous multiplication equations and of quasihomogeneous partial differential equations with constant coefficients. Necessary and sufficient conditions for solvability are given. Several examples are treated in detail, among them the heat and the Schrödinger equation. The final chapter is devoted to quasihomogeneous wave front sets and their application to the description of singularities of quasihomogeneous distributions, in particular to quasihomogeneous fundamental solutions of the heat and of the Schrödinger equation.

  13. Distribution switchgear

    Stewart, Stan

    2004-01-01

    Switchgear plays a fundamental role within the power supply industry. It is required to isolate faulty equipment, divide large networks into sections for repair purposes, reconfigure networks in order to restore power supplies and control other equipment. This book begins with the general principles of the Switchgear function and leads on to discuss topics such as interruption techniques, fault level calculations, switching transients and electrical insulation; making this an invaluable reference source. Solutions to practical problems associated with Distribution Switchgear are also included.

  14. Damage Distributions

    Lützen, Marie

    2001-01-01

    The purpose of Task 2.2 of the HARDER project is, according to the work package description: For various structural configurations of the struck ship and using the results of Task 2.1, the probability distributions for the damage location and size will be derived. The format will be similar to the...... damage statistics and bow height statistics for vessels in the world fleet. The proposals for the p-, r-, and v-factors have been compared to factors from current regulation by examples....... between the damage location, the damage sizes and the main particulars of the struck vessel. From the numerical simulation and the analysis of the damage statistics it is found that the current formulation from the IMO SLF 43/3/2 can be used as basis for determination of the p-, r-, and v...

  15. MPWide: a light-weight library for efficient message passing over wide area networks

    Derek Groen

    2013-12-01

    Full Text Available We present MPWide, a lightweight communication library which allows efficient message passing over a distributed network. MPWide has been designed to connect applications running on distributed (supercomputing) resources, and to maximize the communication performance on wide area networks for those without administrative privileges. It can be used to provide message passing between applications, move files, and make very fast connections in client-server environments. MPWide has already been applied to enable distributed cosmological simulations across up to four supercomputers on two continents, and to couple two different bloodflow simulations to form a multiscale simulation.
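
    The record above describes client-server message passing over wide area networks only in general terms; the sketch below is not MPWide's actual API but a minimal, self-contained Python illustration (host, port, and payload are placeholder assumptions) of the length-prefixed, socket-based message exchange that such a library wraps and tunes for WAN use.

        import socket
        import struct

        HOST, PORT = "127.0.0.1", 9000          # placeholder endpoint, not an MPWide setting

        def send_msg(sock: socket.socket, payload: bytes) -> None:
            # Prefix the payload with its length so the receiver knows how much to read.
            sock.sendall(struct.pack("!I", len(payload)) + payload)

        def recv_msg(sock: socket.socket) -> bytes:
            # Read the 4-byte length header, then the payload itself.
            (length,) = struct.unpack("!I", sock.recv(4))
            data = b""
            while len(data) < length:
                data += sock.recv(length - len(data))
            return data

        def run_server() -> None:
            with socket.create_server((HOST, PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    print("server received:", recv_msg(conn))

        def run_client() -> None:
            with socket.create_connection((HOST, PORT)) as sock:
                send_msg(sock, b"hello across the wide area network")

    In practice, WAN-oriented libraries often extend this basic pattern with multiple parallel streams and tuned buffer sizes to improve throughput over long, high-latency links.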

  16. Structured peer-to-peer systems fundamentals of hierarchical organization, routing, scaling, and security

    Korzun, Dmitry

    2012-01-01

    The field of structured P2P systems has seen fast growth upon the introduction of Distributed Hash Tables (DHTs) in the early 2000s. The first proposals, including Chord, Pastry, Tapestry, were gradually improved to cope with scalability, locality and security issues. By utilizing the processing and bandwidth resources  of end users, the P2P approach enables high performance of data distribution which is hard to achieve with traditional client-server architectures. The P2P computing community is also being actively utilized for software updates to the Internet, P2PSIP VoIP, video-on-demand, an

  17. The Epicure Control System

    The Epicure Control System supports the Fermilab fixed target physics program. The system is distributed across a network of many different types of components. The use of multiple layers of interfaces for communication between logical tasks fits the client-server model. Physical devices are read and controlled using symbolic references entered into a database with an editor utility. The database system consists of a central portion containing all device information and optimized portions distributed among many nodes. Updates to the database are available throughout the system within minutes after being requested

  18. Developing Distributed Collaboration Systems at NASA: A Report from the Field

    Becerra-Fernandez, Irma; Stewart, Helen; Knight, Chris; Norvig, Peter (Technical Monitor)

    2001-01-01

    Web-based collaborative systems have assumed a pivotal role in the information systems development arena. While business to customers (B-to-C) and business to business (B-to-B) electronic commerce systems, search engines, and chat sites are the focus of attention, web-based systems span the gamut of information systems that were traditionally confined to internal organizational client server networks. For example, the Domino Application Server allows Lotus Notes (trademarked) users to build collaborative intranet applications and mySAP.com (trademarked) enables web portals and e-commerce applications for SAP users. This paper presents the experiences in the development of one such system: Postdoc, a government off-the-shelf web-based collaborative environment. Issues related to the design of web-based collaborative information systems, including lessons learned from the development and deployment of the system as well as measured performance, are presented in this paper. Finally, the limitations of the implementation approach and future plans are also presented.

  19. Product Distributions for Distributed Optimization. Chapter 1

    Bieniawski, Stefan R.; Wolpert, David H.

    2004-01-01

    With connections to bounded rational game theory, information theory and statistical mechanics, Product Distribution (PD) theory provides a new framework for performing distributed optimization. Furthermore, PD theory extends and formalizes Collective Intelligence, thus connecting distributed optimization to distributed Reinforcement Learning (RL). This paper provides an overview of PD theory and details an algorithm for performing optimization derived from it. The approach is demonstrated on two unconstrained optimization problems, one with discrete variables and one with continuous variables. To highlight the connections between PD theory and distributed RL, the results are compared with those obtained using distributed reinforcement learning inspired optimization approaches. The inter-relationship of the techniques is discussed.
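
    As a hedged sketch of the framework described above (based on the general probability-collectives formulation; the exact notation is an assumption, not taken from this record), distributed optimization in PD theory replaces a search over joint configurations $x$ with a search over a product distribution whose marginals are updated independently by the agents:

        $$ q(x) = \prod_i q_i(x_i), \qquad \mathcal{L}(q) = \mathbb{E}_{q}\left[G(x)\right] - T\,S(q), $$

    where $G$ is the shared objective, $S$ is the Shannon entropy of $q$, and $T$ is a temperature trading off exploitation against exploration; each agent $i$ lowers $\mathcal{L}$ by adjusting only its own marginal $q_i$.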

  20. Distributed Computing: An Overview

    Md. Firoj Ali

    2015-07-01

    Full Text Available Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. Distributed computing systems offer the potential for improved performance and resource sharing. In this paper we present an overview of distributed computing. We discuss the difference between parallel and distributed computing, the terminologies used in distributed computing, task allocation and performance parameters in distributed computing systems, parallel and distributed algorithm models, and the advantages and scope of distributed computing.

  1. Control and communication system for plasma heating unit and power supply system of large helical device

    Development of the control and communication system for the plasma heating unit and DC power supply of the Large Helical Device (LHD: an experimental machine for fusion science) has been continued. The system is composed of a distributed and concurrent client/server system using several UNIX / OS-9 / Windows-NT workstations, and its sub-systems are controlled by PLC (Programmable Logic Controller), VME (Versa Module Europe) and their own devices. Almost all of its control systems are linked via Ethernet (IEEE 802.3) and FDDI. The man-machine interface system and the hardware / software of the control systems have been completed. (author)

  2. P2P Techniques for Decentralized Applications

    Pacitti, Esther

    2012-01-01

    As an alternative to traditional client-server systems, Peer-to-Peer (P2P) systems provide major advantages in terms of scalability, autonomy and dynamic behavior of peers, and decentralization of control. Thus, they are well suited for large-scale data sharing in distributed environments. Most of the existing P2P approaches for data sharing rely on either structured networks (e.g., DHTs) for efficient indexing, or unstructured networks for ease of deployment, or some combination. However, these approaches have some limitations, such as lack of freedom for data placement in DHTs, and high late

  3. Transaction management

    Chorafas, Dimitris N, Dr

    1998-01-01

    This book provides an essential update for experienced data processing professionals, transaction managers and database specialists who are seeking system solutions beyond the confines of traditional approaches. It provides practical advice on how to manage complex transactions and share distributed databases on client servers and the Internet. Based on extensive research in over 100 companies in the USA, Europe, Japan and the UK, topics covered include: the challenge of global transaction requirements within an expanding business perspective; how to handle long transactions and their consti

  4. The developing one door licensing service system based on RESTful oriented services and MVC framework

    Widiyanto, Sigit; Setyawan, Aris Budi; Tarigan, Avinanta; Sussanto, Herry

    2016-02-01

    The increase in the number of businesses raises the service requirements for companies and Small and Medium Enterprises (SMEs) when submitting their license requests. The service system that is needed must be able to accommodate a large number of documents, various institutions, and the time limitations of applicants. In addition, distributed applications that can be integrated with each other are required. A service-oriented application fits well alongside the client-server application which has been developed by the Government to digitalize submitted data. A RESTful architecture and an MVC framework are embedded in the developed application. As a result, the application proves its capability in solving security, transaction speed, and data accuracy issues.
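
    The abstract describes a RESTful, MVC-based licensing service but gives no concrete interface; the following minimal Python/Flask sketch (route names, fields, and the in-memory store are hypothetical assumptions, not taken from the paper) illustrates what a resource-oriented license-application endpoint of this kind can look like.

        from flask import Flask, jsonify, request

        app = Flask(__name__)
        applications = {}  # in-memory store standing in for the real licensing database

        @app.route("/api/licenses", methods=["POST"])
        def submit_application():
            # Accept a license application document and assign it an identifier.
            doc = request.get_json()
            app_id = len(applications) + 1
            applications[app_id] = {"id": app_id, "status": "submitted", **doc}
            return jsonify(applications[app_id]), 201

        @app.route("/api/licenses/<int:app_id>", methods=["GET"])
        def get_application(app_id):
            # Return the current state of an application, or a not-found message.
            return jsonify(applications.get(app_id, {"error": "not found"}))

        if __name__ == "__main__":
            app.run()

    A production service would back the store with the government database and add authentication, but the resource-per-URL, verb-per-operation pattern shown here is the core of a RESTful design.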

  5. Rural telemedicine project in northern New Mexico

    Zink, S.; Hahn, H.; Rudnick, J.; Snell, J.; Forslund, D. [Los Alamos National Lab., NM (United States); Martinez, P. [Northern New Mexico Community Coll., Espanola, NM (United States)

    1998-12-31

    A virtual electronic medical record system is being deployed over the Internet with security in northern New Mexico using TeleMed, a multimedia medical records management system that uses CORBA-based client-server technology and distributed database architecture. The goal of the NNM Rural Telemedicine Project is to implement TeleMed into fifteen rural clinics and two hospitals within a 25,000 square mile area of northern New Mexico. Evaluation of the project consists of three components: job task analysis, audit of immunized children, and time motion studies. Preliminary results of the evaluation components are presented.

  6. Dual CPU Redundant Technique for Special Controller in Floating Nuclear Power Plant

    A special distributed control system (DCS) was developed to carry out the reactor control in a floating nuclear power plant, and dual CPU redundant technique was an important method to improve the DCS system reliability. The dual CPU redundant technique based on DeviceNet field bus was researched, in which two CPU units and some data I/O units were connected through a DeviceNet field bus, and DeviceNet client/server programs running in the CPU units implement the data synchronization between two CPU units and the redundant control over data I/O units. (authors)

  7. Replicated Data Management for Mobile Computing

    Douglas, Terry

    2008-01-01

    Managing data in a mobile computing environment invariably involves caching or replication. In many cases, a mobile device has access only to data that is stored locally, and much of that data arrives via replication from other devices, PCs, and services. Given portable devices with limited resources, weak or intermittent connectivity, and security vulnerabilities, data replication serves to increase availability, reduce communication costs, foster sharing, and enhance survivability of critical information. Mobile systems have employed a variety of distributed architectures from client-server

  8. IAX-Based Peer-to-Peer VoIP Architecture

    Lazzez, Amor; Fredj, Ouissem Ben; Slimani, Thabet

    2013-01-01

    Nowadays, Voice over IP (VoIP) constitutes a privileged field of service innovation. One benefit of the VoIP technology is that it may be deployed using a centralized or a distributed architecture. One of the most efficient approaches used in the deployment of centralized VoIP systems is based on the use of IAX (Inter-Asterisk Exchange), an open-source signaling/data exchange protocol. Even though they are currently widely used, client-server VoIP systems suffer from many weaknesses such as t...

  9. Next Generation Transport Phenomenology Model

    Strickland, Douglas J.; Knight, Harold; Evans, J. Scott

    2004-01-01

    This report describes the progress made in Quarter 3 of Contract Year 3 on the development of Aeronomy Phenomenology Modeling Tool (APMT), an open-source, component-based, client-server architecture for distributed modeling, analysis, and simulation activities focused on electron and photon transport for general atmospheres. In the past quarter, column emission rate computations were implemented in Java, preexisting Fortran programs for computing synthetic spectra were embedded into APMT through Java wrappers, and work began on a web-based user interface for setting input parameters and running the photoelectron and auroral electron transport models.

  10. Upgrade of the Nuclotron extracted beam diagnostic subsystem

    The subsystem is intended for measurements of the Nuclotron extracted beam parameters. Multiwire proportional chambers are used for transversal beam profile measurements at four points of the beam transfer line. Gas amplification values are tuned by adjusting the high voltage power supplies. The extracted beam intensity is measured by means of an ionization chamber, a variable gain current amplifier DDPCA-300 and a voltage-to-frequency converter. The data is processed by an industrial PC with National Instruments DAQ modules. The client-server distributed application written in the LabView environment allows operators to control the hardware and obtain measurement results over a TCP/IP network. (authors)

  11. Protection of Distribution Systems with Distributed Generation

    Vassbotten, Kristian

    2015-01-01

    In recent years the amount of distributed generation (DG) in distribution systems has increased. This poses problems for the traditional protection scheme, with non-directional over-current relays and fuses. When DG is introduced, the load flow in distribution systems is often reversed; there is a surplus of power on the radial. The present thesis seeks to determine the most beneficial protection scheme to use in distribution systems with DG. To investigate the impact of different relays and...

  12. The Status Quo and Development Tendency of Fourth Generation Language Development Platforms Based on the Client/Server Structure

    朱涛; 刘振娟

    1997-01-01

    4GL (fourth-generation language) development platforms have become the preferred tools for developing MIS, especially large enterprise-level MIS based on the C/S (client/server) structure. Starting from an analysis of the characteristics of 4GLs, the authors introduce several currently popular 4GL development platforms and point out the development trends of 4GLs.

  13. Bivariate discrete Linnik distribution

    Davis Antony Mundassery

    2014-10-01

    Full Text Available Christoph and Schreiber (1998a) studied the discrete analogue of the positive Linnik distribution and obtained its characterizations using the survival function. In this paper, we introduce a bivariate form of the discrete Linnik distribution and study its distributional properties. Characterizations of the bivariate distribution are obtained using compounding schemes. Autoregressive processes are developed with marginals following the bivariate discrete Linnik distribution.

  14. Bivariate discrete Linnik distribution

    Davis Antony Mundassery; Jayakumar, K.

    2014-01-01

    Christoph and Schreiber (1998a) studied the discrete analogue of the positive Linnik distribution and obtained its characterizations using the survival function. In this paper, we introduce a bivariate form of the discrete Linnik distribution and study its distributional properties. Characterizations of the bivariate distribution are obtained using compounding schemes. Autoregressive processes are developed with marginals following the bivariate discrete Linnik distribution.

  15. On bivariate geometric distribution

    Jayakumar, K.; Davis Antony Mundassery

    2013-01-01

    Characterizations of bivariate geometric distribution using univariate and bivariate geometric compounding are obtained. Autoregressive models with marginals as bivariate geometric distribution are developed. Various bivariate geometric distributions analogous to important bivariate exponential distributions like, Marshall-Olkin’s bivariate exponential, Downton’s bivariate exponential and Hawkes’ bivariate exponential are presented.

  16. The Amoroso Distribution

    Crooks, Gavin E

    2010-01-01

    Herein, we review the properties of the Amoroso distribution, the natural unification of the gamma and extreme value distribution families. Over 50 distinct, named distributions (and twice as many synonyms) occur as special cases or limiting forms. Consequently, this single simple functional form encapsulates and systematizes an extensive menagerie of interesting and common probability distributions.
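
    For reference, one commonly cited parameterization of the Amoroso density (stated here as a general formula and hedged as an assumption about notation, since the record itself gives no equations) is

        $$ f(x \mid a, \theta, \alpha, \beta) \;=\; \frac{1}{\Gamma(\alpha)}\,\left|\frac{\beta}{\theta}\right| \left(\frac{x-a}{\theta}\right)^{\alpha\beta-1} \exp\!\left[-\left(\frac{x-a}{\theta}\right)^{\beta}\right], $$

    with support $x \ge a$ for $\theta > 0$ (and $x \le a$ for $\theta < 0$); setting $\beta = 1$ recovers the shifted gamma family, while other parameter choices and limits yield extreme value and many related distributions of the kind the abstract refers to.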

  17. Extended Poisson Exponential Distribution

    Anum Fatima

    2015-09-01

    Full Text Available A new mixture of the Modified Exponential (ME) and Poisson distributions is introduced in this paper. Taking the maximum of Modified Exponential random variables when the sample size follows a zero-truncated Poisson distribution, we derive the new distribution, named the Extended Poisson Exponential distribution. This distribution possesses increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We also investigate some mathematical properties of the distribution along with its information entropies and order statistics. The parameters are estimated using the Maximum Likelihood Estimation procedure. Finally, we illustrate a real data application of the distribution.
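
    The compounding step described in the abstract can be written out as a short derivation (a general sketch; the paper's specific Modified Exponential cdf would be substituted for $F$). If $X_1, X_2, \ldots$ are i.i.d. with cdf $F$ and $N$ follows a zero-truncated Poisson distribution with parameter $\lambda$, independent of the $X_i$, then $Y = \max(X_1, \ldots, X_N)$ has cdf

        $$ G(y) \;=\; \sum_{n=1}^{\infty} F(y)^{n}\,\frac{e^{-\lambda}\lambda^{n}}{n!\,\bigl(1-e^{-\lambda}\bigr)} \;=\; \frac{e^{\lambda F(y)} - 1}{e^{\lambda} - 1}. $$

    As $\lambda \to 0$ this reduces to $F(y)$ itself, which is consistent with the baseline distribution appearing as a special case.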

  18. STIS MAMA Fold Distribution

    Wheeler, Thomas

    2013-10-01

    The performance of MAMA microchannel plates can be monitored using a MAMA fold distribution procedure. The fold distribution provides a measurement of the distribution of charge cloud sizes incident upon the anode giving some measure of change in the pulse-height distribution of the MCP and, therefore, MCP gain. This proposal executes the same steps as the STIS MAMA Fold Distribution, Proposal 13149, as Cycle 20.

  19. Greening File Distribution: Centralized or Distributed?

    Verma, Kshitiz; Anta, Antonio Fernández; Rumín, Rubén Cuevas; Azcorra, Arturo

    2011-01-01

    Although file-distribution applications are responsible for a major portion of current Internet traffic, little effort has so far been dedicated to studying file distribution from the point of view of energy efficiency. In this paper, we present a first approach to the problem of energy efficiency for file distribution. Specifically, we first demonstrate that the general problem of minimizing energy consumption in file distribution in heterogeneous settings is NP-hard. For homogeneous settings, we derive tight lower bounds on energy consumption, and we design a family of algorithms that achieve these bounds. Our results prove that collaborative p2p schemes achieve up to 50% energy savings with respect to the best available centralized file distribution scheme. Through simulation of more realistic cases (e.g., considering network congestion and link variability across hosts) we validate this observation, since our collaborative algorithms always achieve significant energy savings with res...

  20. Distributed Data Management and Distributed File Systems

    Girone, Maria

    2015-12-01

    The LHC program has been successful in part due to the globally distributed computing resources used for collecting, serving, processing, and analyzing the large LHC datasets. The introduction of distributed computing early in the LHC program spawned the development of new technologies and techniques to synchronize information and data between physically separated computing centers. Two of the most challenging services are the distributed file systems and the distributed data management systems. In this paper I will discuss how we have evolved from local site services to more globally independent services in the areas of distributed file systems and data management and how these capabilities may continue to evolve into the future. I will address the design choices, the motivations, and the future evolution of the computing systems used for High Energy Physics.

  1. Distributed Data Management and Distributed File Systems

    Girone, Maria

    2015-01-01

    The LHC program has been successful in part due to the globally distributed computing resources used for collecting, serving, processing, and analyzing the large LHC datasets. The introduction of distributed computing early in the LHC program spawned the development of new technologies and techniques to synchronize information and data between physically separated computing centers. Two of the most challenging services are the distributed file systems and the distributed data management systems. In this paper I will discuss how we have evolved from local site services to more globally independent services in the areas of distributed file systems and data management and how these capabilities may continue to evolve into the future. I will address the design choices, the motivations, and the future evolution of the computing systems used for High Energy Physics.

  2. STELAR: An experiment in the electronic distribution of astronomical literature

    Warnock, A.; Vansteenburg, M. E.; Brotzman, L. E.; Gass, J.; Kovalsky, D.

    1992-01-01

    STELAR (Study of Electronic Literature for Astronomical Research) is a Goddard-based project designed to test methods of delivering technical literature in machine readable form. To that end, we have scanned a five year span of the ApJ, ApJ Supp, AJ and PASP, and have obtained abstracts for eight leading academic journals from NASA/STI CASI, which also makes these abstracts available through the NASA RECON system. We have also obtained machine readable versions of some journal volumes from the publishers, although in many instances, the final typeset versions are no longer available. The fundamental data object for the STELAR database is the article, a collection of items associated with a scientific paper - abstract, scanned pages (in a variety of formats), figures, OCR extractions, forward and backward references, errata and versions of the paper in various formats (e.g., TEX, SGML, PostScript, DVI). Articles are uniquely referenced in the database by journal name, volume number and page number. The selection and delivery of articles is accomplished through the WAIS (Wide Area Information Server) client/server model, requiring only an Internet connection. Modest modifications to the server code have made it capable of delivering the multiple data types required by STELAR. WAIS is a platform independent and fully open multi-disciplinary delivery system, originally developed by Thinking Machines Corp. and made available free of charge. It is based on the ISO Z39.50 standard communications protocol. WAIS servers run under both UNIX and VMS. WAIS clients run on a wide variety of machines, from UNIX-based X Windows systems to MS-DOS and Macintosh microcomputers. The WAIS system includes full-text indexing and searching of documents, a network interface and easy access to a variety of document viewers. ASCII versions of the CASI abstracts have been formatted for display and the full text of the abstracts has been indexed. The entire WAIS database of abstracts is now

  3. Distributed Verification and Hardness of Distributed Approximation

    Sarma, Atish Das; Kor, Liah; Korman, Amos; Nanongkai, Danupon; Pandurangan, Gopal; Peleg, David; Wattenhofer, Roger

    2010-01-01

    We study the verification problem in distributed networks, stated as follows. Let $H$ be a subgraph of a network $G$ where each vertex of $G$ knows which edges incident on it are in $H$. We would like to verify whether $H$ has some properties, e.g., if it is a tree or if it is connected. We would like to perform this verification in a decentralized fashion via a distributed algorithm. The time complexity of verification is measured as the number of rounds of distributed communication. In this paper we initiate a systematic study of distributed verification, and give almost tight lower bounds on the running time of distributed verification algorithms for many fundamental problems such as connectivity, spanning connected subgraph, and $s$-$t$ cut verification. We then show applications of these results in deriving strong unconditional time lower bounds on the hardness of distributed approximation for many classical optimization problems including minimum spanning tree, shortest paths, and minimum cut....

  4. Factorisations of distributive laws

    Kraehmer, Ulrich; Slevin, Paul

    2014-01-01

    Recently, Boehm and Stefan constructed duplicial (paracyclic) objects from distributive laws between (co)monads. Here we define the category of factorisations of a distributive law, show that it acts on this construction, and give some explicit examples.

  5. Ticks: Geographic Distribution

    ... abroad Borrelia miyamotoi Borrelia mayonii Geographic distribution of ticks that bite humans ... and may be difficult to identify. American dog tick (Dermacentor variabilis) Where found: Widely distributed east of ...

  6. Hyperfinite Representation of Distributions

    J Sousa Pinto; R F Hoskins

    2000-11-01

    Hyperfinite representation of distributions is studied following the method introduced by Kinoshita [2, 3], although we use a different approach much in the vein of [4]. Products and Fourier transforms of representatives of distributions are also analysed.

  7. Asymmetric Parton Distributions

    Radyushkin, A V

    1997-01-01

    Applications of perturbative QCD to hard exclusive electroproduction processes in the Bjorken limit at small invariant momentum transfer t bring in a new type of parton distributions which have hybrid properties, resembling both the parton distribution functions and the distribution amplitudes. Their t-dependence is analogous to that of hadronic form factors. We discuss general properties of these new parton distributions, their relation to usual parton densities and the evolution equations which they satisfy.

  8. Ambiguous Proximity Distribution

    Wang, Quanquan; Li, Yongping

    2014-01-01

    Proximity Distribution Kernel is an effective method for bag-of-features based image representation. In this paper, we investigate the soft assignment of visual words to image features for proximity distributions. A visual word contribution function is proposed to model ambiguous proximity distributions. Three ambiguous proximity distributions are developed from three ambiguous contribution functions. The experiments are conducted on both classification and retrieval of medical image data sets. The ...

  9. Distributed Robotics Education

    LUND, Henrik Hautop; Pagliarini, Luigi

    2011-01-01

    Distributed robotics takes many forms, for instance, multirobots, modular robots, and self-reconfigurable robots. The understanding and development of such advanced robotic systems demand extensive knowledge in engineering and computer science. In this paper, we describe the concept of a distributed educational system as a valuable tool for introducing students to interactive parallel and distributed processing programming as the foundation for distributed robotics and human-robot interaction...

  10. Leadership for Distributed Teams

    De Rooij, J.P.G.

    2009-01-01

    The aim of this dissertation was to study the little examined, yet important issue of leadership for distributed teams. Distributed teams are defined as: “teams of which members are geographically distributed and are therefore working predominantly via mediated communication means on an interdepende

  11. Reproduction and Distribution.

    Greer, Michael

    1989-01-01

    This latest in a series of articles on the manager's role in an instructional design project highlights techniques for managing the reproduction and distribution of materials. Guidelines for orienting staff are suggested, a sample reproduction and distribution schedule is given, and a storage and distribution system is discussed. (LRW)

  12. A third generation object-oriented process model:roles and architectures in focus

    Kivistö, K

    2000-01-01

    Abstract This thesis examines and evaluates the Object-Oriented Client/Server (OOCS) model, a process model that can be used when IT organizations develop object-oriented client/server applications. In particular, it defines the roles in the development team and combines them into the process model. Furthermore, the model focuses on the client/server architecture, considering it explicitly. The model has been under construction for several years and it has been test...

  13. Verification of LHS distributions.

    Swiler, Laura Painton

    2006-04-01

    This document provides verification test results for normal, lognormal, and uniform distributions that are used in Sandia's Latin Hypercube Sampling (LHS) software. The purpose of this testing is to verify that the sample values being generated in LHS are distributed according to the desired distribution types. The testing of distribution correctness is done by examining summary statistics, graphical comparisons using quantile-quantile plots, and formal statistical tests such as the Chi-square test, the Kolmogorov-Smirnov test, and the Anderson-Darling test. The overall results from the testing indicate that the generation of normal, lognormal, and uniform distributions in LHS is acceptable.
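
    The following short Python sketch (illustrative only; it uses an ordinary pseudo-random sample as a stand-in for LHS output and is not Sandia's verification code) shows the kind of goodness-of-fit checks the report describes, using SciPy's Kolmogorov-Smirnov and Anderson-Darling tests against a target normal distribution.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(seed=0)
        sample = rng.normal(loc=0.0, scale=1.0, size=1000)   # stand-in for LHS samples

        # Kolmogorov-Smirnov test against the intended N(0, 1) distribution.
        ks_stat, ks_pvalue = stats.kstest(sample, "norm", args=(0.0, 1.0))

        # Anderson-Darling test for normality (returns a statistic and critical values).
        ad_result = stats.anderson(sample, dist="norm")

        print(f"KS statistic = {ks_stat:.4f}, p-value = {ks_pvalue:.4f}")
        print(f"AD statistic = {ad_result.statistic:.4f}, critical values = {ad_result.critical_values}")

    Quantile-quantile plots and a binned Chi-square test would complete the battery of checks listed in the abstract.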

  14. DISTRIBUTED QUERY OPTIMIZATION

    Nicoleta IACOB

    2010-12-01

    Full Text Available The need for distributed systems has been driven by the type of business developed by companies with geographically distributed offices, where the specific organizational structure promotes a decentralized business model. This paper describes the techniques and concepts of system architecture for distributed database management systems, followed by a presentation of the implementation phases involved when dealing with distributed queries across distributed systems. The goal of query optimization is to determine the most efficient way to execute a query in a distributed environment, by obtaining a lower system response time and also by minimizing the query execution time. For this, we analyze the factors that influence the ways to execute a query and also review the available strategies for optimizing distributed query execution.

  15. Bivariate extreme value distributions

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extreme value distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
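
    The representation mentioned in the abstract is usually written via a dependence function; the Pickands form is given here as a standard statement, not as a quotation from the report:

        $$ G(x, y) \;=\; \exp\!\left\{ \log\bigl(G_{1}(x)\,G_{2}(y)\bigr)\; A\!\left(\frac{\log G_{2}(y)}{\log G_{1}(x) + \log G_{2}(y)}\right) \right\}, $$

    where $G_1$ and $G_2$ are the marginal extreme value distributions and $A$ is a convex function on $[0,1]$ with $A(0) = A(1) = 1$ and $\max(w, 1-w) \le A(w) \le 1$; the choice $A \equiv 1$ corresponds to independent margins.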

  16. Predictable return distributions

    Pedersen, Thomas Quistgaard

    This paper provides detailed insights into predictability of the entire stock and bond return distribution through the use of quantile regression. This allows us to examine specific parts of the return distribution such as the tails or the center, and for a sufficiently fine grid of quantiles we can...... predictable as a function of economic state variables. The results are, however, very different for stocks and bonds. The state variables primarily predict only location shifts in the stock return distribution, while they also predict changes in higher-order moments in the bond return distribution. Out-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing an...

  17. Kernel Oriented Generator Distribution

    Bekker, A.; Arashi, M.

    2014-01-01

    Matrix variate beta (MVB) distributions are used in different fields of hypothesis testing, multivariate correlation analysis, zero regression, canonical correlation analysis, etc. In this approach a unified methodology is proposed to generate matrix variate distributions by combining the kernel of MVB distributions of different types with an unknown Borel measurable function of the trace operator over matrix space, called the generator component. The latter component is a principal element of the...

  18. Managing Distributed Software Projects

    Persson, John Stouby

    2010-01-01

    Increasingly, software projects are becoming geographically distributed, with limited face-to-face interaction between participants. These projects face particular challenges that need careful managerial attention. This PhD study reports on how we can understand and support the management of distributed software projects, based on a literature study and a case study. The main emphasis of the literature study was on how to support the management of distributed software projects, but also contri...

  19. Leadership for Distributed Teams

    De Rooij, J.P.G.

    2009-01-01

    The aim of this dissertation was to study the little examined, yet important issue of leadership for distributed teams. Distributed teams are defined as: “teams of which members are geographically distributed and are therefore working predominantly via mediated communication means on an interdependent task and in realizing a joint goal” (adapted from Bell & Kozlowski, 2002 and Dubé & Paré, 2004). Chapter 1 first presents the outline of the dissertation. Next, several characteristics of distri...

  20. Energy of generalized distributions

    González-Dávila, J.C.

    2015-01-01

    We consider the energy of smooth generalized distributions and also of singular foliations on compact Riemannian manifolds for which the set of their singularities consists of a finite number of isolated points and of pairwise disjoint closed submanifolds. We derive a lower bound for the energy of all $q$-dimensional almost regular distributions, for each $q < \dim M$, and find several examples of foliations which minimize the energy functional over certain sets of smooth generalized distribut...

  1. Distributed generation systems model

    Barklund, C.R.

    1994-12-31

    A slide presentation is given on a distributed generation systems model developed at the Idaho National Engineering Laboratory, and its application to a situation within the Idaho Power Company's service territory. The objectives of the work were to develop a screening model for distributed generation alternatives, to develop a better understanding of distributed generation as a utility resource, and to further INEL's understanding of utility concerns in implementing technological change.

  2. Distributed Energy Technology Laboratory

    Federal Laboratory Consortium — The Distributed Energy Technologies Laboratory (DETL) is an extension of the power electronics testing capabilities of the Photovoltaic System Evaluation Laboratory...

  3. Learning Poisson Binomial Distributions

    Daskalakis, Constantinos; Diakonikolas, Ilias; Servedio, Rocco A

    2015-01-01

    We consider a basic problem in unsupervised learning: learning an unknown Poisson binomial distribution. A Poisson binomial distribution (PBD) over TeX is the distribution of a sum of TeX independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by Poisson (Recherches sur la Probabilité des jugements en matière criminelle et en matière civile. Bachelier, Paris, 1837) and are a natural TeX -parameter generalizatio...
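
    Some mathematical symbols were lost in the record above; the standard definition of a Poisson binomial distribution (stated here in general terms, not quoted from the paper) is the law of a sum of independent, not necessarily identically distributed, Bernoulli variables:

        $$ X \;=\; \sum_{i=1}^{n} X_i, \quad X_i \sim \mathrm{Bernoulli}(p_i) \ \text{independent}, \qquad \Pr[X = k] \;=\; \sum_{\substack{S \subseteq \{1,\dots,n\} \\ |S| = k}} \ \prod_{i \in S} p_i \prod_{j \notin S} (1 - p_j). $$

    When all $p_i$ are equal this reduces to the familiar binomial distribution, which is why the family is an $n$-parameter generalization of it.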

  4. Advanced air distribution

    Melikov, Arsen Krikor

    2011-01-01

    The aim of total volume air distribution (TVAD) involves achieving uniform temperature and velocity in the occupied zone and an environment designed for an average occupant. The supply of large amounts of clean and cool air is needed to maintain temperature and pollution concentration at acceptable.... Ventilation in hospitals is essential to decrease the risk of airborne cross-infection. At present, mixing air distribution at a minimum of 12 ach is used in infection wards. Advanced air distribution has the potential to aid in achieving healthy, comfortable and productive indoor environments at levels... higher than what can be achieved today with the commonly used total volume air distribution principles....

  5. Distributed Structure Searchable Toxicity

    U.S. Environmental Protection Agency — The Distributed Structure Searchable Toxicity (DSSTox) online resource provides high quality chemical structures and annotations in association with toxicity data....

  6. Electric distribution systems

    Sallam, A A

    2010-01-01

    "Electricity distribution is the penultimate stage in the delivery of electricity to end users. The only book that deals with the key topics of interest to distribution system engineers, Electric Distribution Systems presents a comprehensive treatment of the subject with an emphasis on both the practical and academic points of view. Reviewing traditional and cutting-edge topics, the text is useful to practicing engineers working with utility companies and industry, undergraduate graduate and students, and faculty members who wish to increase their skills in distribution system automation and monitoring."--

  7. Sorting a distribution theory

    Mahmoud, Hosam M

    2011-01-01

    A cutting-edge look at the emerging distributional theory of sorting Research on distributions associated with sorting algorithms has grown dramatically over the last few decades, spawning many exact and limiting distributions of complexity measures for many sorting algorithms. Yet much of this information has been scattered in disparate and highly specialized sources throughout the literature. In Sorting: A Distribution Theory, leading authority Hosam Mahmoud compiles, consolidates, and clarifies the large volume of available research, providing a much-needed, comprehensive treatment of the

  8. Building the repositories to serve

    The project to design and build the Superconducting Super Collider (SSC) Laboratory also includes the exciting opportunity to implement client/server information systems. Lab technologists were eager to take advantage of the cost savings inherent in open systems and a distributed, client server environment and, at the same time, conscious of the need to provide secure repositories for sensitive data as well as a schedule-sensitive acquisition strategy for mission critical software. During the first year of project activity, micro-based project management and business support systems were acquired and implemented to support a small study project of less than 400 people allocating contracts of less than $1 million. The transition to modern business systems capable of supporting more than 10,000 participants (worldwide) who would be researching and developing the new technologies that would support the world's largest scientific instrument, a 42 Tevatron superconducting super collider, became a mission critical event. This paper will present the SSC Laboratory's strategy to balance its commitment to open systems, structured query language (SQL) standards and its success with acquiring commercial off the shelf software to support immediate goals. Included will be an outline of the vital roles played by other labs (Livermore, CERN, Brookhaven, Fermi and others) and a discussion of future collaboration potentials to leverage the information activities of all Department of Energy funded labs

  9. Mobile Agent-Based Directed Diffusion in Wireless Sensor Networks

    Kwon Taekyoung

    2007-01-01

    Full Text Available In environments where the source nodes are close to one another and generate a lot of sensory data traffic with redundancy, transmitting all sensory data by individual nodes not only wastes the scarce wireless bandwidth, but also consumes a lot of battery energy. Instead of each source node sending sensory data to its sink for aggregation (the so-called client/server computing), Qi et al. in 2003 proposed a mobile agent (MA)-based distributed sensor network (MADSN) for collaborative signal and information processing, which considerably reduces the sensory data traffic and query latency as well. However, MADSN is based on the assumption that the operation of the mobile agent is only carried out within one hop in a clustering-based architecture. This paper considers MA in multihop environments and adopts directed diffusion (DD) to dispatch the MA. The gradient in DD gives a hint to efficiently forward the MA among target sensors. The mobile agent paradigm in combination with the DD framework is dubbed mobile agent-based directed diffusion (MADD). With appropriate parameters set, extensive simulation shows that MADD exhibits better performance than original DD (in the client/server paradigm) in terms of packet delivery ratio, energy consumption, and end-to-end delivery latency.

  10. Federal Emergency Management Information System (FEMIS) Data Management Guide for FEMIS Version 1.4.6

    Angel, L.K.; Bower, J.C.; Burnett, R.A.; Downing, T.R.; Fangman, P.M.; Hoza, M.; Johnson, D.M.; Johnson, S.M.; Loveall, R.M.; Millard, W.D.; Schulze, S.A.; Wood, B.M.

    1999-06-29

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.

  11. Building the repositories to serve

    The project to design and build the Superconducting Super Collider (SSC) Laboratory also includes the exciting opportunity to implement client/server information systems. Lab technologists were eager to take advantage of the cost savings inherent in open systems and a distributed, client server environment and, at the same time, conscious of the need to provide secure repositories for sensitive data as well as a schedule-sensitive acquisition strategy for mission critical software. During the first year of project activity, micro-based project management and business support systems were acquired and implemented to support a small study project of less than 400 people allocating contracts of less than $1 million. The transition to modern business systems capable of supporting more than 10,000 participants (worldwide) who would be researching and developing the new technologies that would support the world's largest scientific instrument, a 42 Tevatron superconducting super collider, became a mission critical event. This paper will present the SSC Laboratory's strategy to balance our commitment to open systems, structured query language (SQL) standards and our success with acquiring commercial off the shelf software (COTS) to support our immediate goals. Included will be an outline of the vital roles played by other labs (Livermore, CERN, Brookhaven, Fermi and others) and a discussion of future collaboration potentials to leverage the information activities of all Department of Energy (DOE) funded labs

  12. GLobal Integrated Design Environment (GLIDE): A Concurrent Engineering Application

    McGuire, Melissa L.; Kunkel, Matthew R.; Smith, David A.

    2010-01-01

    The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well defined, HyperText Transfer Protocol (HTTP) based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process of a concurrent engineering study while reducing potential errors associated with manually sharing information between study participants.
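
    GLIDE's record describes an HTTP-based API for sharing parameters between disciplines but does not give the actual routes; the Python sketch below is a hypothetical illustration of that pattern (the base URL, session name, credentials, and parameter names are all invented placeholders, and the real GLIDE API may differ).

        import requests

        BASE_URL = "https://glide.example.nasa.gov/api"   # hypothetical base URL
        SESSION = "study-42"                              # hypothetical study/session identifier
        AUTH = ("engineer", "secret")                     # placeholder credentials

        def get_parameter(name: str):
            # Read a shared parameter published by another discipline.
            resp = requests.get(f"{BASE_URL}/{SESSION}/parameters/{name}", auth=AUTH)
            resp.raise_for_status()
            return resp.json()["value"]

        def put_parameter(name: str, value) -> None:
            # Publish an updated value so other tools can see it immediately.
            resp = requests.put(
                f"{BASE_URL}/{SESSION}/parameters/{name}",
                json={"value": value},
                auth=AUTH,
            )
            resp.raise_for_status()

        if __name__ == "__main__":
            thrust = get_parameter("engine_thrust")
            put_parameter("stage_mass", 1234.5)

    This GET/PUT pattern is the kind of interface an Excel add-in could wrap behind worksheet functions, so that engineers pull and push shared values without leaving the spreadsheet.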

  13. Federal Emergency Management Information System (FEMIS), Installation Guide for FEMIS 1.4.6

    Arp, J.A.; Burnett, R.A.; Carter, R.J.; Downing, T.R.; Dunkle, J.R.; Fangman, P.M.; Gackle, P.P.; Homer, B.J.; Johnson, D.M.; Johnson, R.L.; Johnson, S.M.; Loveall, R.M.; Stephan, A.J.; Millard, W.D.; Wood, B.M.

    1999-06-29

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.

  14. Federal Emergency Management Information System (FEMIS) System Administration Guide for FEMIS Version 1.4.6

    Arp, J.A.; Bower, J.C.; Burnett, R.A.; Carter, R.J.; Downing, T.R.; Fangman, P.M.; Gerhardstein, L.H.; Homer, B.J.; Johnson, D.M.; Johnson, R.L.; Johnson, S.M.; Loveall, R.M.; Martin, T.J.; Millard, W.D.; Schulze, S.A.; Stoops, L.R.; Tzemos, S.; Wood, B.M.

    1999-06-29

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.

  15. Distribution system state estimation

    Wang, Haibin

    With the development of automation in distribution systems, distribution SCADA and many other automated meters have been installed on distribution systems. Distribution Management Systems (DMS) have also been further developed and become more sophisticated. It is possible and useful to apply state estimation techniques to distribution systems. However, distribution systems have many features that are different from transmission systems. Thus, the state estimation technology used in transmission systems cannot be directly used in distribution systems. This project's goal was to develop a state estimation algorithm suitable for distribution systems. Because of the limited number of real-time measurements in distribution systems, the state estimator cannot acquire enough real-time measurements for convergence, so pseudo-measurements are necessary for a distribution system state estimator. A load estimation procedure is proposed which can provide estimates of real-time customer load profiles, which can be treated as pseudo-measurements for the state estimator. The algorithm utilizes a newly installed AMR system to calculate more accurate load estimates. A branch-current-based three-phase state estimation algorithm is developed and tested. This method chooses the magnitude and phase angle of the branch current as the state variables, and thus makes the formulation of the Jacobian matrix less complicated. The algorithm decouples the three phases, which is computationally efficient. Additionally, the algorithm is less sensitive to the line parameters than the node-voltage-based algorithms. The algorithm has been tested on three IEEE radial test feeders for both accuracy and convergence speed. Due to economic constraints, the number of real-time measurements that can be installed on distribution systems is limited. So it is important to decide what kinds of measurement devices to install and where to install them. Some rules of meter placement based
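
    As background for the approach described above, distribution system state estimators (including branch-current-based ones) are commonly posed as weighted least squares problems; the formulation below is the standard textbook statement, not necessarily the thesis's exact notation:

        $$ \hat{x} \;=\; \arg\min_{x} \; J(x) \;=\; \sum_{i=1}^{m} \frac{\bigl(z_i - h_i(x)\bigr)^2}{\sigma_i^2}, $$

    where the $z_i$ are the real-time and pseudo-measurements, $h_i(\cdot)$ are the measurement functions, and $\sigma_i^2$ are the measurement variances; in the branch-current formulation the state vector $x$ collects the magnitudes and phase angles of the branch currents of each phase. The minimizer is usually found iteratively from the normal equations $\bigl(H^{\top} W H\bigr)\,\Delta x = H^{\top} W \bigl(z - h(x)\bigr)$ with $H = \partial h / \partial x$ and $W = \mathrm{diag}(1/\sigma_i^2)$.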

  16. Smart Distribution Systems

    Yazhou Jiang

    2016-04-01

    Full Text Available The increasing importance of system reliability and resilience is changing the way distribution systems are planned and operated. To achieve distribution system self-healing against power outages, emerging technologies and devices, such as remote-controlled switches (RCSs) and smart meters, are being deployed. The higher level of automation is transforming traditional distribution systems into the smart distribution systems (SDSs) of the future. The availability of data and remote control capability in SDSs provides distribution operators with an opportunity to optimize system operation and control. In this paper, the development of SDSs and the resulting benefits of enhanced system capabilities are discussed. A comprehensive survey is conducted on the state-of-the-art applications of RCSs and smart meters in SDSs. Specifically, a new method, called Temporal Causal Diagram (TCD), is used to incorporate outage notifications from smart meters for enhanced outage management. To fully utilize the fast operation of RCSs, the spanning tree search algorithm is used to develop service restoration strategies. Optimal placement of RCSs and the resulting enhancement of system reliability are discussed. Distribution system resilience with respect to extreme events is presented. Test cases are used to demonstrate the benefit of SDSs. Active management of distributed generators (DGs) is introduced. Future research in a smart distribution environment is proposed.
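
    The abstract mentions spanning tree search for service restoration without detailing it; the toy Python sketch below (feeder sections, switch states, and names are invented for illustration, and the paper's actual algorithm may differ) shows the basic idea of growing a spanning tree from an energized source through closable switches while keeping a faulted section isolated.

        from collections import deque

        # Toy feeder model: nodes are feeder sections, edges are switches.
        # True means the switch may be closed during restoration. Names are hypothetical.
        switches = {
            ("source", "sec1"): True,
            ("sec1", "sec2"): True,
            ("sec2", "faulted"): False,   # keep the faulted section isolated
            ("sec1", "sec3"): True,
            ("sec3", "sec4"): True,
        }

        def restoration_tree(source="source"):
            """Breadth-first search returning a spanning tree of sections
            reachable from the energized source through closable switches."""
            adjacency = {}
            for (a, b), closable in switches.items():
                if closable:
                    adjacency.setdefault(a, []).append(b)
                    adjacency.setdefault(b, []).append(a)
            tree, seen, queue = [], {source}, deque([source])
            while queue:
                node = queue.popleft()
                for neighbor in adjacency.get(node, []):
                    if neighbor not in seen:
                        seen.add(neighbor)
                        tree.append((node, neighbor))   # close this switch
                        queue.append(neighbor)
            return tree

        print(restoration_tree())

    Each edge returned corresponds to a switch to close; sections not reached (here the faulted one) remain de-energized.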

  17. Advanced Distribution Management System

    Avazov, Artur R.; Sobinova, Liubov A.

    2016-02-01

    This article describes the advisability of using advanced distribution management systems (ADMS) in the area of electricity distribution networks and considers the premises for implementing ADMS in the Smart Grid era. It also gives the big picture of ADMS and discusses its advantages and functionalities.

  18. Software distribution using xnetlib

    Dongarra, J.J. [Univ. of Tennessee, Knoxville, TN (US). Dept. of Computer Science]|[Oak Ridge National Lab., TN (US); Rowan, T.H. [Oak Ridge National Lab., TN (US); Wade, R.C. [Univ. of Tennessee, Knoxville, TN (US). Dept. of Computer Science

    1993-06-01

    Xnetlib is a new tool for software distribution. Whereas its predecessor netlib uses e-mail as the user interface to its large collection of public-domain mathematical software, xnetlib uses an X Window interface and socket-based communication. Xnetlib makes it easy to search through a large distributed collection of software and to retrieve requested software in seconds.

  19. Epicentral distribution in 2006

    CHEN Pei-shan

    2007-01-01

    For showing the epicentral distribution in and near China as well as all over the world, two epicentral maps for the earthquakes that occurred in the previous year are published annually in the sixth issue of each year. Figures 1 and 2 represent the epicentral distributions in and near China and all over the world in 2006, respectively.

  20. Epicentral distribution in 2005

    CHEN Pei-shan

    2006-01-01

    For showing the epicentral distribution in and near China as well as all over the world, two epicentral maps for the earthquakes that occurred in the previous year are published annually in the sixth issue of each year. Figures 1 and 2 represent the epicentral distributions in and near China and all over the world in 2005, respectively.

  1. Epicentral distribution in 2007

    CHEN Pei-shan

    2008-01-01

    For showing the epicentral distribution in and near China as well as all over the world, two epicentral maps for the earthquakes that occurred in the previous year are published annually in the sixth issue of each year. Figures 1 and 2 represent the epicentral distributions in and near China and all over the world in 2007, respectively.

  2. Epicentral distribution in 2004

    CHEN Pei-shan

    2005-01-01

    For showing the epicentral distribution in and near China as well as all over the world, two epicentral maps for the earthquakes that occurred in the previous year are published annually in the sixth issue of each year. Figures 1 and 2 represent the epicentral distributions in and near China and all over the world in 2004, respectively.

  3. Groundwater and Distribution Workbook.

    Ekman, John E.

    Presented is a student manual designed for the Wisconsin Vocational, Technical and Adult Education Groundwater and Distribution Training Course. This program introduces waterworks operators-in-training to basic skills and knowledge required for the operation of a groundwater distribution waterworks facility. Arranged according to the general order…

  4. Cache Oblivious Distribution Sweeping

    Brodal, G.S.; Fagerberg, R.

    2002-01-01

    We adapt the distribution sweeping method to the cache oblivious model. Distribution sweeping is the name used for a general approach for divide-and-conquer algorithms where the combination of solved subproblems can be viewed as a merging process of streams. We demonstrate by a series of algorith...

  5. Evaluating Distributed Timing Constraints

    Kristensen, C.H.; Drejer, N.

    In this paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.

  6. Distribution of prime numbers

    Ouannas, Moussa

    2011-01-01

    In this paper I present the distribution of prime numbers, which has been treated in many studies via the Riemann zeta function because of its remarkable property that its non-trivial zeros are closely connected to the primes; in this work, however, I show that the distribution of prime numbers can be obtained while remaining within the natural numbers only.

  7. Metrics for Food Distribution.

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in food distribution, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  8. Distributed Language and Dialogism

    Steffensen, Sune Vork

    2015-01-01

    This article takes a starting point in Per Linell’s (2013) review article on the book Distributed Language (Cowley, 2011a) and other contributions to the field of ‘Distributed Language’, including Cowley et al. (2010) and Hodges et al. (2012). The Distributed Language approach is a naturalistic and anti-representational approach to language that builds on recent developments in the cognitive sciences. With a starting point in Linell’s discussion of the approach, the article aims to clarify four aspects of a distributed view of language vis-à-vis the tradition of Dialogism, as presented by Linell (2009, 2013). First, the article discusses a central principle in Distributed Language, “the principle of non-locality,” and Linell’s interpretation of it; more generally, this is a discussion of contrasting views on “the locus of language” and derived methodological issues. Second, the article...

  9. Managing Distributed Software Projects

    Persson, John Stouby

    Increasingly, software projects are becoming geographically distributed, with limited face-to-face interaction between participants. These projects face particular challenges that need careful managerial attention. This PhD study reports on how we can understand and support the management of distributed software projects, based on a literature study and a case study. The main emphasis of the literature study was on how to support the management of distributed software projects, but it also contributed to an understanding of these projects. The main emphasis of the case study was on how to understand the management of distributed software projects, but it also contributed to supporting the management of these projects. The literature study integrates what we know about risks and risk-resolution techniques into a framework for managing risks in distributed contexts. This framework was developed...

  10. Distributed programming using AGAPIA

    Ciprian I. Paduraru

    2014-01-01

    As distributed applications became more commonplace and more sophisticated, new programming languages and models for distributed programming were created. The main scope of most of these languages was to simplify the development process by providing higher expressivity. This paper presents another programming language for distributed computing, named AGAPIA. Its main purpose is to provide increased expressiveness while keeping performance close to that of a core programming language. To demonstrate its capabilities, the paper shows implementations of some well-known patterns specific to distributed programming, along with a comparison to the corresponding MPI implementations. A complete application is presented by combining a few patterns. By taking advantage of the transparent communication model and of high-level statements and patterns intended to simplify the development process, the implementation of distributed programs becomes modular, easier to write, clearer, and closer to the original solution formulation.
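
    To give a flavour of the kind of MPI baseline such pattern comparisons are made against, here is a minimal scatter/compute/gather sketch using mpi4py rather than AGAPIA; the workload and chunking are invented for the example.

```python
# Run with, e.g.: mpiexec -n 4 python scatter_gather.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # Root prepares one chunk of work per process (hypothetical workload).
    data = [list(range(i * 10, (i + 1) * 10)) for i in range(size)]
else:
    data = None

chunk = comm.scatter(data, root=0)      # distribute the chunks
partial = sum(x * x for x in chunk)     # each process works on its own chunk
results = comm.gather(partial, root=0)  # collect the partial results

if rank == 0:
    print("total:", sum(results))
```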

  11. Distributed Robotics Education

    Lund, Henrik Hautop; Pagliarini, Luigi

    2011-01-01

    Distributed robotics takes many forms, for instance, multirobots, modular robots, and self-reconfigurable robots. The understanding and development of such advanced robotic systems demand extensive knowledge in engineering and computer science. In this paper, we describe the concept of a distributed educational system as a valuable tool for introducing students to interactive parallel and distributed processing programming as the foundation for distributed robotics and human-robot interaction development. This is done by providing an educational tool that enables problem representation to be changed, related to multirobot control and human-robot interaction control, from virtual to physical representation. The proposed system is valuable for bringing a vast number of issues into education – such as parallel programming, distribution, communication protocols, master dependency, connectivity...

  12. Principles of the BTOPMC/SCAU distributed watershed hydrological model with system design

    周买春; 肖红玉; 胡月明; 刘远

    2015-01-01

    respectively stored in a relational database management system and in files. As the core of the system, the models consisted of modules for terrain analysis, runoff generation, flow concentration and basin application. The terrain module computed static characteristics of the basin terrain. The runoff generation and flow concentration modules computed dynamic hydrological processes by integrating meteorological inputs and basin terrain characteristics. In order to improve modeling efficiency, OpenMP (Open Multi-Processing) programming was used to exploit multiple CPU cores for parallel computation in these two modules. Based on an epoll mechanism and programmed in C/C++, the communication layer was designed for message passing among the other layers and supported simultaneous multi-user access. Depending on the user's intention, it was possible, using integration tools in the data illustration layer, to extract inputs, outputs and some processing results from the data layer and display them intuitively in tables or graphs. The user operation layer, which provided a concise GUI (Graphic User Interface), was programmed in Java, so it was able to run on different platforms such as Microsoft Windows and various Unix and Linux systems. BTOPMC/SCAU was executed in a Client/Server environment where the user operation layer and data illustration layer were deployed to clients, and the models and databases resided on the server. The communication layer passed messages between clients and server. In this way, the system could concentrate on the heavy burden of hydrological computation while allowing large amounts of data input and queries from many users everywhere for basin management. Two operation modes were provided to run the system: calibration and simulation, and the calibration mode supported two methods: manual and automatic. In automatic calibration, a global optimization algorithm, SCE-UA (Shuffled Complex Evolution developed at the University of Arizona), was used, and seven kinds of objective functions could...
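
    The epoll-based communication layer described above can be sketched in outline. The example below uses Python's selectors module (which picks epoll where available) instead of the authors' C/C++ implementation, and the port number and acknowledgement-style message handling are purely illustrative assumptions.

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # uses epoll on Linux when available

def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle_message)

def handle_message(conn):
    data = conn.recv(4096)
    if not data:
        sel.unregister(conn)
        conn.close()
        return
    # In the real system this would dispatch a model-layer request;
    # here we simply acknowledge the message.
    conn.sendall(b"ACK:" + data)

def serve(port=9090):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", port))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    while True:
        for key, _mask in sel.select():
            callback = key.data
            callback(key.fileobj)

if __name__ == "__main__":
    serve()
```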

  13. Beam distributions beyond RMS

    The beam is often represented only by its position (mean) and the width (rms = root mean square) of its distribution. To obtain these beam parameters in noisy conditions with high backgrounds, a Gaussian distribution with offset (4 parameters) is fitted to the measured beam distribution. This gives a very robust answer and is not very sensitive to background subtraction techniques. To get higher moments of the distribution, like skew or kurtosis, a fitting function with one or two more parameters is desired which would model the higher moments. In this paper we concentrate on an asymmetric Gaussian and a super-Gaussian function that give something like the skew and the kurtosis of the distribution. This information is used to quantify special beam distributions. Some are unwanted, like beam tails (skew) from transverse wakefields, higher-order dispersive aberrations or potential-well distortion in a damping ring. A negative kurtosis of a beam distribution describes a more rectangular, compact shape, like an over-compressed beam in z or a nearly double-horned energy distribution, while a positive kurtosis looks more like a ''Christmas tree'' and can quantify a beam mismatch after filamentation. Besides the advantages of the quantification, there are some distributions which need further investigation, like long flat tails which create background particles in a detector. In particle simulations, on the other hand, a simple rms number might grossly overestimate the effective size (e.g. for producing luminosity) due to a few particles which are far away from the core. This can reduce the practical gain of a big theoretical improvement in the beam size.
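
    As a rough illustration, the sketch below fits a super-Gaussian (flattened or peaked Gaussian) to a synthetic beam profile with SciPy. The functional form, parameter names and the synthetic data are assumptions made for the example and are not the exact fitting functions used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def super_gaussian(x, amp, mu, sigma, p, offset):
    """Flattened (p > 2) or peaked (p < 2) Gaussian-like profile."""
    return amp * np.exp(-np.abs((x - mu) / sigma) ** p) + offset

# Synthetic, slightly rectangular beam profile with noise (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 200)
y = super_gaussian(x, 1.0, 0.3, 1.5, 3.0, 0.02) + 0.01 * rng.normal(size=x.size)

popt, _ = curve_fit(super_gaussian, x, y, p0=[1.0, 0.0, 1.0, 2.0, 0.0])
amp, mu, sigma, p, offset = popt
print(f"centre={mu:.3f}  width={sigma:.3f}  exponent p={p:.2f}  (p > 2 behaves like negative kurtosis)")
```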

  14. Electric power distribution handbook

    Short, Thomas Allen

    2014-01-01

    Of the "big three" components of electrical infrastructure, distribution typically gets the least attention. In fact, a thorough, up-to-date treatment of the subject hasn't been published in years, yet deregulation and technical changes have increased the need for better information. Filling this void, the Electric Power Distribution Handbook delivers comprehensive, cutting-edge coverage of the electrical aspects of power distribution systems. The first few chapters of this pragmatic guidebook focus on equipment-oriented information and applications such as choosing transformer connections,

  15. Electric power distribution handbook

    Short, Thomas Allen

    2003-01-01

    Of the "big three" components of the electricity infrastructure, distribution typically gets the least attention, and no thorough, up-to-date treatment of the subject has been published in years. Filling that void, the Electric Power Distribution Handbook provides comprehensive information on the electrical aspects of power distribution systems. It is an unparalleled source for the background information, hard-to-find tables, graphs, methods, and statistics that power engineers need, and includes tips and solutions for problem solving and improving performance. In short, this handbook giv

  16. Distributed System Contract Monitoring

    Francalanza, Adrian; Pace, Gordon; 10.4204/EPTCS.68.4

    2011-01-01

    The use of behavioural contracts, to specify, regulate and verify systems, is particularly relevant to runtime monitoring of distributed systems. System distribution poses major challenges to contract monitoring, from monitoring-induced information leaks to computation load balancing, communication overheads and fault-tolerance. We present mDPi, a location-aware process calculus, for reasoning about monitoring of distributed systems. We define a family of Labelled Transition Systems for this calculus, which allow formal reasoning about different monitoring strategies at different levels of abstractions. We also illustrate the expressivity of the calculus by showing how contracts in a simple contract language can be synthesised into different mDPi monitors.

  17. Simulating distributed systems

    Newman, H B

    2001-01-01

    The simulation framework developed within the "Models of Networked Analysis at Regional Centers" (MONARC) project as a design and optimization tool for large scale distributed systems is presented. The goals are to provide a realistic simulation of distributed computing systems, customized for specific physics data processing tasks and to offer a flexible and dynamic environment to evaluate the performance of a range of possible distributed computing architectures. A detailed simulation of a large system, the CMS High Level Trigger (HLT) production farm, is also presented. (3 refs).

  18. Nonforward Parton Distributions

    Radyushkin, A V

    1997-01-01

    Applications of perturbative QCD to deeply virtual Compton scattering and hard exclusive electroproduction processes require a generalization of usual parton distributions for the case when long-distance information is accumulated in nonforward matrix elements of quark and gluon light-cone operators. We describe two types of nonperturbative functions parametrizing such matrix elements: double distributions F(x,y;t) and nonforward distribution functions F_\\zeta (X;t), discuss their spectral properties, evolution equations which they satisfy, basic uses and general aspects of factorization for hard exclusive processes.

  19. Distributed analysis at LHCb

    The distributed analysis experience to date at LHCb has been positive: job success rates are high and wait times for high-priority jobs are low. LHCb users access the grid using the GANGA job-management package, while the LHCb virtual organization manages its resources using the DIRAC package. This clear division of labor has benefitted LHCb and its users greatly; it is a major reason why distributed analysis at LHCb has been so successful. The newly formed LHCb distributed analysis support team has also proved to be a success.

  20. Annular Flow Distribution test

    This report documents the Babcock and Wilcox (B&W) Annular Flow Distribution testing for the Savannah River Laboratory (SRL). The objective of the Annular Flow Distribution Test Program is to characterize the flow distribution between annular coolant channels for the Mark-22 fuel assembly with the bottom fitting insert (BFI) in place. Flow rate measurements for each annular channel were obtained by establishing ''hydraulic similarity'' between an instrumented fuel assembly with the BFI removed and a ''reference'' fuel assembly with the BFI installed. Empirical correlations of annular flow rates were generated for a range of boundary conditions

  1. On the Conditional Distribution of the Multivariate $t$ Distribution

    Ding, Peng

    2016-01-01

    As alternatives to the normal distributions, $t$ distributions are widely applied in robust analysis for data with outliers or heavy tails. The properties of the multivariate $t$ distribution are well documented in Kotz and Nadarajah's book, which, however, states a wrong conclusion about the conditional distribution of the multivariate $t$ distribution. Previous literature has recognized that the conditional distribution of the multivariate $t$ distribution also follows the multivariate $t$ ...
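
    For reference, the corrected statement can be written down explicitly. The following is a sketch from general knowledge of the multivariate t distribution rather than a quotation from Ding's paper, so the notation (location mu, scale Sigma, degrees of freedom nu, block indices) is assumed here.

```latex
% Partition X ~ t_p(\mu, \Sigma, \nu) into X_1 (p_1-dimensional) and X_2 (p_2-dimensional).
% With the Mahalanobis term d_1(x_1) = (x_1-\mu_1)^\top \Sigma_{11}^{-1}(x_1-\mu_1),
\[
  X_2 \mid X_1 = x_1 \;\sim\;
  t_{p_2}\!\Bigl(
    \mu_2 + \Sigma_{21}\Sigma_{11}^{-1}(x_1-\mu_1),\;
    \frac{\nu + d_1(x_1)}{\nu + p_1}
      \bigl(\Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}\bigr),\;
    \nu + p_1
  \Bigr),
\]
% i.e. again a multivariate t, but with a scale matrix that depends on x_1,
% unlike the conditional distribution in the multivariate normal case.
```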

  2. Sheaves of Schwartz distributions

    The theory of sheaves is a relevant mathematical language for describing the localization principle, known to be valid for the Schwartz distributions (generalized functions). After introducing some fundamentals of sheaves and the basic facts about distribution spaces, the distribution sheaf DΩ of topological C-vector spaces over an open set Ω in Rn is systematically studied. A sheaf DM of distributions on a C∞-manifold M is then introduced, following a definition of Hoermander's for its particular elements. Further, a general definition of sheaves on a manifold that are locally isomorphic to (or modelled on) a sheaf on Rn is proposed. The sheaf properties of DM are studied and this sheaf is shown to be locally isomorphic to DΩ, as a sheaf of topological vector spaces. (author). 14 refs

  3. Deciding bisimilarities on distributions

    Eisentraut, Christian; Hermanns, Holger; Krämer, Julia; Turrini, Andrea; Zhang, Lijun

    Probabilistic automata (PA) are a prominent compositional concurrency model. As a way to justify property-preserving abstractions, in recent years bisimulation relations over probability distributions have been proposed both in the strong and the weak setting. Different from the usual bisimulation relations, which are defined over states, an algorithmic treatment of these relations is inherently hard, as their carrier set is uncountable, even for finite PAs. The coarsest of these relations, weak distribution bisimulation, stands out from the others in that no equivalent state-based characterisation is known so far. This paper presents an equivalent state-based reformulation for weak distribution bisimulation, rendering it amenable to algorithmic treatment. Then, decision procedures for the probability-distribution-based bisimulation relations are presented.

  4. Distributive and Collective affixes

    Trondhjem, Naja Blytmann

    2015-01-01

    , repetition, habitual, continual and distributive/collective situations. The phasal aspectual affixes are further divided into “inner” phasal aspectual affixes with a verb-modifying function and scope over the verb stem, and “outer” phasal aspectual affixes with a sentence-modifying function and scope over the sentence. Most of the quantitative aspectual affixes have a verb-modifying function, and amount to about 33 affixes. Among the quantitative aspectual affixes, about eleven affixes contain distributional/collective meaning. The distributional/collective affixes indicate the plurality of the first and/or … of the affixes have very small semantic differences. Several affixes seem to have more than one meaning – a concrete meaning and an aspectual meaning. In this paper I shall give examples of how to differentiate between the distributive/collective aspectual affixes. ...

  5. Ribbon Seal Distribution Map

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains GIS layers that depict the known spatial distributions (i.e., ranges) and reported breeding areas of ribbon seals (Histriophoca fasciata). It...

  6. Bearded Seal Distribution Map

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains GIS layers that depict the known spatial distributions (i.e., ranges) of the two subspecies of bearded seals (Erignathus barbatus). It was...

  7. Ringed Seal Distribution Map

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains GIS layers that depict the known spatial distributions (i.e., ranges) of the five subspecies of ringed seals (Phoca hispida). It was produced...

  8. Spotted Seal Distribution Map

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains GIS layers that depict the known spatial distributions (i.e., ranges) and reported breeding areas of spotted seals (Phoca largha). It was...

  9. Financing Distributed Generation

    This paper introduces the engineer who is undertaking distributed generation projects to a wide range of financing options. Distributed generation systems (such as internal combustion engines, small gas turbines, fuel cells and photovoltaics) all require an initial investment, which is recovered over time through revenues or savings. An understanding of the cost of capital and financing structures helps the engineer develop realistic expectations and not be offended by the common requirements of financing organizations. This paper discusses several mechanisms for financing distributed generation projects: appropriations; debt (commercial bank loan); mortgage; home equity loan; limited partnership; vendor financing; general obligation bond; revenue bond; lease; Energy Savings Performance Contract; utility programs; chauffage (end-use purchase); and grants. The paper also discusses financial strategies for businesses focusing on distributed generation: venture capital; informal investors (''business angels''); bank and debt financing; and the stock market

  10. ATLAS Distributed Computing Automation

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.

  11. Combustion of Fractal Distributions

    Sotolongo, Oscar; Lopez, Enrique

    1994-01-01

    The advantages of introducing a fractal viewpoint in the field of combustion is emphasized. It is shown that the condition for perfect combustion of a collection of drops is the self-similarity of the distribution.

  12. Fuzzy barrier distributions

    Heavy-ion collisions often produce a fusion barrier distribution with structures displaying a fingerprint of couplings to highly collective excitations [1]. Basically the same distribution can be obtained from large-angle quasi-elastic scattering, though here the role of the many weak direct-reaction channels is unclear. For 20Ne + 90Zr we have observed the barrier structures expected for the highly deformed neon projectile, but for 20Ne + 92Zr we find a completely smooth distribution (see Fig. 1). We find that transfer channels in these systems are of similar strength, but single-particle excitations are significantly stronger in the latter case. They apparently reduce the 'resolving power' of the quasi-elastic channel, which leads to a smeared-out, or 'fuzzy', barrier distribution. This is the first case in which such a phenomenon has been observed. (author)

  13. Robust Distributed Online Prediction

    Dekel, Ofer; Shamir, Ohad; Xiao, Lin

    2010-01-01

    The standard model of online prediction deals with serial processing of inputs by a single processor. However, in large-scale online prediction problems, where inputs arrive at a high rate, an increasingly common necessity is to distribute the computation across several processors. A non-trivial challenge is to design distributed algorithms for online prediction, which maintain good regret guarantees. In \\cite{DMB}, we presented the DMB algorithm, which is a generic framework to convert any serial gradient-based online prediction algorithm into a distributed algorithm. Moreover, its regret guarantee is asymptotically optimal for smooth convex loss functions and stochastic inputs. On the flip side, it is fragile to many types of failures that are common in distributed environments. In this companion paper, we present variants of the DMB algorithm, which are resilient to many types of network failures, and tolerant to varying performance of the computing nodes.
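
    The core idea behind a distributed mini-batch step, averaging gradients computed on several workers before a single serial update, can be sketched as follows. The squared loss, learning rate and thread-based workers are stand-ins chosen for the example; this is not the paper's DMB algorithm or its fault-tolerant variants.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def squared_loss_grad(w, x, y):
    """Gradient of 0.5 * (w.x - y)^2 with respect to w."""
    return (w @ x - y) * x

def distributed_minibatch_step(w, inputs, targets, lr=0.1, workers=4):
    """One step: workers compute per-example gradients in parallel, the
    averaged gradient drives a single serial update (sketch only)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        grads = list(pool.map(lambda xy: squared_loss_grad(w, *xy),
                              zip(inputs, targets)))
    return w - lr * np.mean(grads, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w_true = np.array([2.0, -1.0])
    w = np.zeros(2)
    for _ in range(200):
        X = rng.normal(size=(8, 2))                 # one mini-batch of inputs
        y = X @ w_true + 0.01 * rng.normal(size=8)  # noisy targets
        w = distributed_minibatch_step(w, X, y)
    print("estimated weights:", w)
```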

  14. Navigating Distributed Services

    Beute, Berco

    2002-01-01

    , to a situation where they are distributed across the Internet. The second trend is the shift from a virtual environment that solely consists of distributed documents to a virtual environment that consists of both distributed documents and distributed services. The third and final trend is the increasing diversity of devices used to access information on the Internet. The focal point of the thesis is an initial exploration of the effects of the trends on users as they navigate the virtual environment of distributed documents and services. To begin, the thesis uses scenarios as a heuristic device to identify and analyse the main effects of the trends. This is followed by an exploration of the theory of navigation in Information Spaces, which is in turn followed by an overview of theories and the state of the art in navigating distributed services. These explorations of both theory and practice resulted in a large number of...

  15. Agile distributed software development

    Persson, John Stouby; Mathiassen, Lars; Aaen, Ivan

    2012-01-01

    While face-to-face interaction is fundamental in agile software development, distributed environments must rely extensively on mediated interactions. Practicing agile principles in distributed environments therefore poses particular control challenges related to balancing fixed vs. evolving quality requirements and people- vs. process-based collaboration. To investigate these challenges, we conducted an in-depth case study of a successful agile distributed software project with participants from a Russian firm and a Danish firm. Applying Kirsch’s elements of control framework, we offer an analysis of how … in conjunction with informal roles and relationships such as clan-like control inherent in agile development. Overall, the study demonstrates that, if appropriately applied, communication technologies can significantly support distributed, agile practices by allowing concurrent enactment of both formal...

  16. Distributed Quantum Programming

    D'Hondt, Ellie

    2010-01-01

    In this paper we explore the structure and applicability of the Distributed Measurement Calculus (DMC), an assembly language for distributed measurement-based quantum computations. We describe the formal language's syntax and semantics, both operational and denotational, and state several properties that are crucial to the practical usability of our language, such as equivalence of our semantics, as well as compositionality and context-freeness of DMC programs. We show how to put these properties to use by constructing a composite program that implements distributed controlled operations, in the knowledge that the semantics of this program does not change under the various composition operations. Our formal model is the basis of a quantum virtual machine construction for distributed quantum computations, which we elaborate upon in the latter part of this work. This virtual machine embodies the formal semantics of DMC such that programming execution no longer needs to be analysed by hand. Far from a literal tr...

  17. Electricity Distribution Effectiveness

    Waldemar Szpyra; Wiesław Nowak; Rafał Tarko

    2015-01-01

    This paper discusses the basic concepts of cost accounting in the power industry and selected ways of assessing the effectiveness of electricity distribution. The results of effectiveness analysis of MV/LV distribution transformer replacement are presented, and unit costs of energy transmission through various medium-voltage line types are compared. The calculation results confirm the viability of replacing transformers manufactured before 1975. Replacing transformers manufactured after...

  18. Managing Distributed Knowledge Systems

    Sørensen, Brian Vejrum; Gelbuda, Modestas

    2005-01-01

    The article argues that the growth of a de novo knowledge-based organization depends on managing and coordinating increasingly growing and, therefore, distributed knowledge. Moreover, the growth in knowledge is often accompanied by an increasing organizational complexity, which is a result of … This paper contributes to the research on organizations as distributed knowledge systems by addressing two weaknesses of the social practice literature. Firstly, it downplays the importance of formal structure and organizational design and intervention efforts by key organizational members. Secondly...

  19. Unimodality and genus distributions

    Wan, Liangxia

    2014-01-01

    New criteria are given showing that certain combinations of finite unimodal polynomials are unimodal. Given unimodal polynomials with explicit expressions and dependent recursion relations, we propose an approach to determine their modes. As applications, the unimodality of several polynomial sequences satisfying dependent recurrence relations, together with their modes, is provided. Then the unimodality of genus distributions for some ladders and crosses can be determined. As special cases, that of genus distribution...

  20. Intelligent distributed computing

    Thampi, Sabu

    2015-01-01

    This book contains a selection of refereed and revised papers of the Intelligent Distributed Computing Track originally presented at the third International Symposium on Intelligent Informatics (ISI-2014), September 24-27, 2014, Delhi, India.  The papers selected for this Track cover several Distributed Computing and related topics including Peer-to-Peer Networks, Cloud Computing, Mobile Clouds, Wireless Sensor Networks, and their applications.

  1. Generic Distributed Simulation Architecture

    Booker, C.P.

    1999-05-14

    A Generic Distributed Simulation Architecture is described that allows a simulation to be automatically distributed over a heterogeneous network of computers and executed with very little human direction. A prototype Framework is presented that implements the elements of the Architecture and demonstrates the feasibility of the concepts. It provides a basis for a future, improved Framework that will support legacy models. Because the Framework is implemented in Java, it may be installed on almost any modern computer system.

  2. MRS parton distributions

    Martin, A.D.; Stirling, W.J. [Durham Univ. (United Kingdom). Dept. of Physics]; Roberts, R.G.

    1993-11-01

    The MRS parton distribution analysis is described. The latest sets are shown to give an excellent description of a wide range of deep-inelastic and other hard scattering data. Two important theoretical issues - the behaviour of the distributions at small x and the flavour structure of the quark sea - are discussed in detail. A comparison with the new structure function data from the HERA storage ring is made, and the outlook for the future is discussed. (Author).

  3. Distributed operating systems

    Mullender, Sape J.

    1987-01-01

    In the past five years, distributed operating systems research has gone through a consolidation phase. On a large number of design issues there is now considerable consensus between different research groups. In this paper, an overview of recent research in distributed systems is given. In turn, the paper discusses overall system structure, protection issues, file system designs, problems and solutions for fault tolerance and a mechanism that is rapidly becoming very important for efficient d...

  4. Competition in electricity distribution

    The traditional view of electricity distribution is that it is a natural monopoly. Few authors have explored the question as to whether electricity distributors truly are natural monopolies or not, while observation of the current industrial practice tends to suggest that a 'market' for distribution activities does actually exist. This is a paradox for a natural monopoly. Our explanation is that monopoly characteristics well characterise the network infrastructure, but not the network operation service. (author)

  5. Logistic distribution management

    Knytlová, Michaela

    2013-01-01

    The diploma thesis "Logistics distribution management" consists of two main parts: a literature review and a practical part. The literature review mostly contains information about logistics in general, business logistics and distribution. It also covers the possibilities of using marketing, business management, and macroeconomic data while implementing a strategic business logistics plan for entering a new market with an existing range of production. In the literature revi...

  6. Multivariate Bernoulli distribution

    Dai, Bin; Ding, Shilin; Wahba, Grace

    2012-01-01

    In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. This distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly the model can estimate not only the main effects and pairwise interactions among the nodes but also is capable of modeling higher order interactions, allowing for the existence of complex cliq...

  7. ATLAS distributed analysis

    Adams, David; Branco, Miguel; Albrand, Solveig; Rybkine, G.; Orellana, F.; Liko, D.; Tan, C.L.; Deng, W.; Kannan, C.; Harrison, Karl; Fassi, Farida; Fulachier, J.; Chetan, N.; Haeberli, C.; Soroko, A.

    2004-01-01

    The ATLAS distributed analysis (ADA) system is described. The ATLAS experiment has more than 2000 physicists from 150 institutions in 34 countries. Users, data and processing are distributed over these sites. ADA makes use of a collection of high-level web services whose interfaces are expressed in terms of AJDL (abstract job definition language), which includes descriptions of datasets, transformations and jobs. The high-level services are implemented using generic parts...

  8. Polygamy of distributed entanglement

    Buscemi F.; Gour G.; Kim J.S.

    2009-01-01

    While quantum entanglement is known to be monogamous (i.e. shared entanglement is restricted in multi-partite settings), here we show that distributed entanglement (or the potential for entanglement) is by nature polygamous. By establishing the concept of one-way unlocalizable entanglement (UE) and investigating its properties, we provide a polygamy inequality of distributed entanglement in tripartite quantum systems of arbitrary dimension. We also provide a polygamy inequality in multi-qubit...

  9. Distributed generation hits market

    The pace at which vendors are developing and marketing gas turbines and reciprocating engines for small-scale applications may signal the widespread growth of distributed generation. Loosely defined to refer to applications in which power generation equipment is located close to end users who have near-term power capacity needs, distributed generation encompasses a broad range of technologies and load requirements. Disagreement is inevitable, but many industry observers associate distributed generation with applications anywhere from 25 kW to 25 MW. Ten years ago, distributed generation users only represented about 2% of the world market. Today, that figure has increased to about 4 or 5%, and probably could settle in the 20% range within a 3-to-5-year period, according to Michael Jones, San Diego, Calif.-based Solar Turbines Inc. power generation marketing manager. The US Energy Information Administration predicts about 175 GW of generation capacity will be added domestically by 2010. If 20% comes from smaller plants, distributed generation could account for about 35 GW. Even with more competition, it's highly unlikely distributed generation will totally replace current market structures and central stations. Distributed generation may be best suited for making market inroads when and where central systems need upgrading, and should prove its worth when the system can't handle peak demands. Typical applications include small reciprocating engine generators at remote customer sites or larger gas turbines to boost the grid. Additional market opportunities include standby capacity, peak shaving, power quality, cogeneration and capacity rental for immediate demand requirements. Integration of distributed generation systems--using gas-fueled engines, gas-fired combustion engines and fuel cells--can upgrade power quality for customers and reduce operating costs for electric utilities

  10. Distributed Self Management for Distributed Security Systems

    Hilker, Michael

    2008-01-01

    Distributed systems such as artificial immune systems, complex adaptive systems, or multi-agent systems are widely used in computer science, e.g. for network security, optimisation, or simulation. In these systems, small entities move through the network and perform certain tasks. At some point, the entities move to another place and therefore require information about where moving would be most profitable. Commonly used systems do not provide any information or use a centralised approach where a center delegates the entities. This article discusses whether a small amount of information about the neighbours enhances the performance of the overall system or not. For this purpose, two information protocols are introduced and analysed. In addition, the protocols are implemented and tested using the artificial immune system SANA, which protects a network against intrusions.

  11. The isotopic distribution conundrum.

    Valkenborg, Dirk; Mertens, Inge; Lemière, Filip; Witters, Erwin; Burzykowski, Tomasz

    2012-01-01

    Although access to high-resolution mass spectrometry (MS), especially in the field of biomolecular MS, is becoming readily available due to recent advances in MS technology, the accompanied information on isotopic distribution in high-resolution spectra is not used at its full potential, mainly because of lack of knowledge and/or awareness. In this review, we give an insight into the practical problems related to calculating the isotopic distribution for large biomolecules, and present an overview of methods for the calculation of the isotopic distribution. We discuss the key events that triggered the development of various algorithms and explain the rationale of how and why the various isotopic-distribution calculations were performed. The review is focused around the developmental stages as briefly outlined below, starting with the first observation of an isotopic distribution. The observations of Beynon in the field of organic MS that chlorine appeared in a mass spectrum as two variants with odds 3:1 lie at the basis of the first wave of algorithms for the calculation of the isotopic distribution, based on the atomic composition of a molecule. From here on, we explain why more complex biomolecules such as peptides exhibit a highly complex isotope pattern when assayed by MS, and we discuss how combinatorial difficulties complicate the calculation of the isotopic distribution on computers. For this purpose, we highlight three methods, which were introduced in the 1980s. These are the stepwise procedure introduced by Kubinyi, the polynomial expansion from Brownawell and Fillippo, and the multinomial expansion from Yergey. The next development was instigated by Rockwood, who suggested to decompose the isotopic distribution in terms of their nucleon count instead of the exact mass. In this respect, we could claim that the term "aggregated" isotopic distribution is more appropriate. Due to the simplification of the isotopic distribution to its aggregated counterpart
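
    As a small illustration of the polynomial-expansion idea discussed in this record, the sketch below computes an aggregated (nucleon-count) isotopic distribution by repeatedly convolving per-element isotope abundance vectors. The abundances are rounded textbook values and the molecule is arbitrary, so the numbers are indicative only.

```python
import numpy as np

# Approximate isotope abundances indexed by extra nucleons above the lightest
# isotope (rounded values; for illustration only).
ISOTOPES = {
    "C": [0.9893, 0.0107],             # 12C, 13C
    "H": [0.99988, 0.00012],           # 1H, 2H
    "N": [0.99636, 0.00364],           # 14N, 15N
    "O": [0.99757, 0.00038, 0.00205],  # 16O, 17O, 18O
}

def aggregated_isotopic_distribution(formula):
    """Convolve elemental abundance polynomials, e.g. formula={'C': 6, 'H': 12, 'O': 6}."""
    dist = np.array([1.0])
    for element, count in formula.items():
        for _ in range(count):
            dist = np.convolve(dist, ISOTOPES[element])
    return dist

if __name__ == "__main__":
    glucose = {"C": 6, "H": 12, "O": 6}
    d = aggregated_isotopic_distribution(glucose)
    for k, prob in enumerate(d[:4]):
        print(f"M+{k}: {prob:.4f}")
```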

  12. Learning Poisson Binomial Distributions

    Daskalakis, Constantinos; Servedio, R

    2011-01-01

    We consider a basic problem in unsupervised learning: learning an unknown \\emph{Poisson Binomial Distribution} over $\\{0,1,...,n\\}$. A Poisson Binomial Distribution (PBD) is a sum $X = X_1 + ... + X_n$ of $n$ independent Bernoulli random variables which may have arbitrary expectations. We work in a framework where the learner is given access to independent draws from the distribution and must (with high probability) output a hypothesis distribution which has total variation distance at most $\\eps$ from the unknown target PBD. As our main result we give a highly efficient algorithm which learns to $\\eps$-accuracy using $\\tilde{O}(1/\\eps^3)$ samples independent of $n$. The running time of the algorithm is \\emph{quasilinear} in the size of its input data, i.e. $\\tilde{O}(\\log(n)/\\eps^3)$ bit-operations (observe that each draw from the distribution is a $\\log(n)$-bit string). This is nearly optimal since any algorithm must use $\\Omega(1/\\eps^2)$ samples. We also give positive and negative results for some extensi...
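
    The learning setting can be made concrete with a toy sketch: draw samples from a Poisson Binomial Distribution and measure the total variation distance between a candidate hypothesis (here, naively, a single binomial with matched mean) and the empirical distribution. This only illustrates the problem setup; it is not the learning algorithm of the paper.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
n = 20
p = rng.uniform(0.1, 0.9, size=n)   # arbitrary Bernoulli expectations

# Draw m samples from the PBD X = X_1 + ... + X_n.
m = 5000
samples = (rng.random((m, n)) < p).sum(axis=1)
empirical = np.bincount(samples, minlength=n + 1) / m

# Naive hypothesis: a single Binomial(n, mean(p)) -- not the paper's algorithm.
hypothesis = binom.pmf(np.arange(n + 1), n, p.mean())

tv_distance = 0.5 * np.abs(empirical - hypothesis).sum()
print(f"total variation distance ~ {tv_distance:.3f}")
```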

  13. Distributed replica dynamics

    Zhang, Liang; Chill, Samuel T.; Henkelman, Graeme

    2015-11-01

    A distributed replica dynamics (DRD) method is proposed to calculate rare-event molecular dynamics using distributed computational resources. Similar to Voter's parallel replica dynamics (PRD) method, the dynamics of independent replicas of the system are calculated on different computational clients. In DRD, each replica runs molecular dynamics from an initial state for a fixed simulation time and then reports information about the trajectory back to the server. A simulation clock on the server accumulates the simulation time of each replica until one reports a transition to a new state. Subsequent calculations are initiated from within this new state and the process is repeated to follow the state-to-state evolution of the system. DRD is designed to work with asynchronous and distributed computing resources in which the clients may not be able to communicate with each other. Additionally, clients can be added or removed from the simulation at any point in the calculation. Even with heterogeneous computing clients, we prove that the DRD method reproduces the correct probability distribution of escape times. We also show this correspondence numerically; molecular dynamics simulations of Al(100) adatom diffusion using PRD and DRD give consistent exponential distributions of escape times. Finally, we discuss guidelines for choosing the optimal number of replicas and replica trajectory length for the DRD method.
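
    The server-side bookkeeping described above, accumulating the simulation time reported by each replica segment until one reports a transition, can be sketched as follows. The replica trajectories are faked with an exponential escape-time draw purely to exercise the accumulation logic; this is not the authors' implementation.

```python
import random

SEGMENT_TIME = 1.0   # fixed simulation time per replica segment (arbitrary units)
TRUE_RATE = 0.05     # hypothetical escape rate used to fake replica reports

def run_segment(rng):
    """Pretend to run one replica segment; report (time_used, transition_found)."""
    escape = rng.expovariate(TRUE_RATE)
    if escape < SEGMENT_TIME:
        return escape, True      # a transition happened partway through the segment
    return SEGMENT_TIME, False   # segment finished with no transition

def accumulated_escape_time(n_replicas=16, seed=0):
    """Accumulate reported simulation time across replicas until a transition."""
    rng = random.Random(seed)
    clock = 0.0
    while True:
        for _ in range(n_replicas):
            used, hit = run_segment(rng)
            clock += used
            if hit:
                return clock

if __name__ == "__main__":
    times = [accumulated_escape_time(seed=s) for s in range(2000)]
    print("mean escape time ~", sum(times) / len(times), "(expected ~", 1 / TRUE_RATE, ")")
```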

  14. An Extended Pareto Distribution

    Mohamad Mead

    2014-10-01

    For the first time, a new continuous distribution, called the generalized beta exponentiated Pareto type I (GBEP) [McDonald exponentiated Pareto] distribution, is defined and investigated. The new distribution contains as special sub-models some well-known and less well-known distributions, such as the generalized beta Pareto (GBP) [McDonald Pareto], the Kumaraswamy exponentiated Pareto (KEP), Kumaraswamy Pareto (KP), beta exponentiated Pareto (BEP), beta Pareto (BP), exponentiated Pareto (EP) and Pareto distributions, among several others. Various structural properties of the new distribution are derived, including explicit expressions for the moments, moment generating function, incomplete moments, quantile function, mean deviations and Rényi entropy. Lorenz, Bonferroni and Zenga curves are derived. The method of maximum likelihood is proposed for estimating the model parameters. We obtain the observed information matrix. The usefulness of the new model is illustrated by means of two real data sets. We hope that this generalization may attract wider applications in reliability, biology and lifetime data analysis.

  15. Distributed processor systems

    In recent years, there has been a growing tendency in high-energy physics and in other fields to solve computational problems by distributing tasks among the resources of inter-coupled processing devices and associated system elements. This trend has gained further momentum more recently with the increased availability of low-cost processors and with the development of the means of data distribution. In two lectures, the broad question of distributed computing systems is examined and the historical development of such systems reviewed. An attempt is made to examine the reasons for the existence of these systems and to discern the main trends for the future. The components of distributed systems are discussed in some detail and particular emphasis is placed on the importance of standards and conventions in certain key system components. The ideas and principles of distributed systems are discussed in general terms, but these are illustrated by a number of concrete examples drawn from the context of the high-energy physics environment. (Auth.)

  16. Moment Distributions of Phase Type

    Bladt, Mogens; Nielsen, Bo Friis

    In this paper we prove that the class of distributions on the positive reals with a rational Laplace transform, also known as matrix-exponential distributions, is closed under formation of moment distributions. In particular, the results are hence valid for the well-known class of phase-type distributions. We construct representations for moment distributions based on a general matrix-exponential distribution, which turn out to be a generalization of the moment distributions based on exponential distributions. For moment distributions based on phase-type distributions we find an appropriate...
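
    For orientation, the usual definition of a moment distribution is sketched below; the notation is generic and the exponential example is added here for illustration, not taken from the paper, which works at the level of matrix-exponential representations.

```latex
% For a distribution F on (0,\infty) with finite n-th moment
% \mu_n = \int_0^\infty x^n \, dF(x), the n-th moment distribution F^{(n)} is
\[
  dF^{(n)}(x) \;=\; \frac{x^{n}\, dF(x)}{\mu_n}, \qquad x > 0 .
\]
% Example: if F is exponential with rate \lambda, then F^{(n)} is the
% Erlang(n+1, \lambda) distribution, since
% x^{n}\lambda e^{-\lambda x}\big/(n!/\lambda^{n}) = \lambda^{n+1} x^{n} e^{-\lambda x}/n!.
```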

  17. On exchangeable multinomial distributions

    George, E. Olusegun; Cheon, Kyeongmi; Yuan, Yilian; Szabo, Aniko

    2016-01-01

    We derive an expression for the joint distribution of exchangeable multinomial random variables, which generalizes the multinomial distribution based on independent trials while retaining some of its important properties. Unlike de Finetti's representation theorem for a binary sequence, the exchangeable multinomial distribution derived here does not require that the finite set of random variables under consideration be a subset of an infinite sequence. Using expressions for higher moments and correlations, we show that the covariance matrix for exchangeable multinomial data has a different form from that usually assumed in the literature, and we analyse data from developmental toxicology studies. The proposed analyses have been implemented in R and are available on CRAN in the CorrBin package.

  18. Electricity Distribution Effectiveness

    Waldemar Szpyra

    2015-12-01

    This paper discusses the basic concepts of cost accounting in the power industry and selected ways of assessing the effectiveness of electricity distribution. The results of an effectiveness analysis of MV/LV distribution transformer replacement are presented, and unit costs of energy transmission through various medium-voltage line types are compared. The calculation results confirm the viability of replacing transformers manufactured before 1975. Replacing transformers manufactured after 1975 – only to reduce energy losses – is not economically justified. Increasing use of PAS-type lines for energy transmission in local distribution networks is reasonable. Cabling these networks under the current calculation rules of discounts for excessive power outages is not viable, even in areas particularly exposed to catastrophic wire icing.

  19. Industrial power distribution

    Fehr, Ralph

    2016-01-01

    In this fully updated version of Industrial Power Distribution, the author addresses key areas of electric power distribution from an end-user perspective for both electrical engineers and students who are training for a career in the electrical power engineering field. Industrial Power Distribution, Second Edition, begins by describing how industrial facilities are supplied from utility sources, which is supported with background information on the components of AC power, voltage drop calculations, and the sizing of conductors and transformers. Important concepts and discussions are featured throughout the book, including those for sequence networks, ladder logic, motor application, fault calculations, and transformer connections. The book concludes with an introduction to power quality, how it affects industrial power systems, and an expansion of the concept of power factor, including a distortion term made necessary by the existence of harmonics.

  20. Distributed System Contract Monitoring

    Francalanza, Adrian

    2011-09-01

    The use of behavioural contracts, to specify, regulate and verify systems, is particularly relevant to runtime monitoring of distributed systems. System distribution poses major challenges to contract monitoring, from monitoring-induced information leaks to computation load balancing, communication overheads and fault-tolerance. We present mDPi, a location-aware process calculus, for reasoning about monitoring of distributed systems. We define a family of Labelled Transition Systems for this calculus, which allow formal reasoning about different monitoring strategies at different levels of abstraction. We also illustrate the expressivity of the calculus by showing how contracts in a simple contract language can be synthesised into different mDPi monitors.

  1. Distributed Wind Market Applications

    Forsyth, T.; Baring-Gould, I.

    2007-11-01

    Distributed wind energy systems provide clean, renewable power for on-site use and help relieve pressure on the power grid while providing jobs and contributing to energy security for homes, farms, schools, factories, private and public facilities, distribution utilities, and remote locations. America pioneered small wind technology in the 1920s, and it is the only renewable energy industry segment that the United States still dominates in technology, manufacturing, and world market share. The series of analyses covered by this report were conducted to assess some of the most likely ways that advanced wind turbines could be utilized apart from large, central-station power systems. Each chapter represents a final report on a specific market segment written by leading experts in the field. As such, this document does not speak with one voice but is rather a compendium of different perspectives, documented by a variety of people in the U.S. distributed wind field.

  2. ATLAS Distributed Computing

    Schovancova, J; The ATLAS collaboration

    2011-01-01

    The poster details the different aspects of the ATLAS Distributed Computing experience after the first year of LHC data taking. We describe the performance of the ATLAS distributed computing system and the lessons learned during the 2010 run, pointing out parts of the system which were in good shape and also spotting areas which required improvements. Improvements ranged from hardware upgrades on the ATLAS Tier-0 computing pools to improve data distribution rates, to tuning of FTS channels between CERN and the Tier-1s, and studying data access patterns for Grid analysis to improve the global processing rate. We show recent software development driven by operational needs, with emphasis on data management and job execution in the ATLAS production system.

  3. Nonequilibrium distributions in superconductors

    The nonequilibrium distribution functions of quasiparticles and phonons in superconductors are calculated for various cases. The conditions under which the nonequilibrium distributions exist are found. The dependences of the temperature and concentration of excitations on the pumping intensity, the sample thickness, and other parameters of the superconductor are calculated. In the current state, the dependences of these quantities on the superfluid velocity and the current are investigated, and it is found that the dependence T(v_s) has a minimum and that the current becomes negative for v_s above a critical value v_s^(1). It is also shown that in the nonequilibrium superconductor a state with v_s* ≠ 0 and J(v_s*) = 0 may exist. We have determined the nonequilibrium distribution function for a tunnel junction and investigated the V-A characteristic under conditions for which an absolute negative resistance may exist.

  4. A distribution network review

    Fairbairn, R.J.; Maunder, D.; Kenyon, P.

    1999-07-01

    This report summarises the findings of a study reviewing the distribution network in England, Scotland and Wales to evaluate its ability to accommodate more embedded generation from both fossil fuel and renewable energy sources. The background to the study is traced, and descriptions of the existing electricity supply system, the licence conditions relating to embedded generation, and the effects of the Review of Electricity Trading Arrangements are given. The ability of the UK distribution networks to accept embedded generation is examined, and technical benefits/drawbacks arising from embedded generation, and the potential for uptake of embedded generation technologies are considered. The distribution network capacity and the potential uptake of embedded generation are compared, and possible solutions to overcome obstacles are suggested. (UK)

  5. Distributed object computing in the Internet era

    Désiré Nguessan

    2003-01-01

    In this paper, we discuss recent trends in distributed objects and Internet computing technologies. Both technologies converge to create a paradigm for distributed computing. We provide an overview of CORBA (Common Object Request Broker Architecture), emphasizing its open architecture for distributed applications based on distributed objects, and of IIOP (Internet Inter-ORB Protocol), which improves the integration of applications in heterogeneous environments. The CORBA protocol is emerging as the business application messaging standard for the Internet and deserves attention from Information Technology (IT) organizations. We conclude that CORBA, together with the Internet, constitutes a perfect symbiotic relationship for building, maintaining and extending mission-critical client/server applications.

  6. Distributed photovoltaic grid transformers

    Shertukde, Hemchandra Madhusudan

    2014-01-01

    The demand for alternative energy sources fuels the need for electric power and controls engineers to possess a practical understanding of transformers suitable for solar energy. Meeting that need, Distributed Photovoltaic Grid Transformers begins by explaining the basic theory behind transformers in the solar power arena, and then progresses to describe the development, manufacture, and sale of distributed photovoltaic (PV) grid transformers, which help boost the electric DC voltage (generally at 30 volts) harnessed by a PV panel to a higher level (generally at 115 volts or higher) once it is

  7. Liquidity, welfare and distribution

    Martín Gil Samuel

    2012-01-01

    This work presents a dynamic general equilibrium model where wealth distribution is endogenous. I provide channels of causality that suggest a complex relationship between financial markets and real activity which breaks down the classical dichotomy. As a consequence, the Friedman rule does not hold. In terms of the current events taking place in the world economy, this paper provides a rationale for warning against the perils of an economy satiated with liquidity. Efficiency and distribution cannot thus be considered as separate attributes once we account for the interactions between financial markets and economic performance.

  8. Decentralized Distributed Bayesian Estimation

    Dedecius, Kamil; Sečkárová, Vladimíra

    Praha: ÚTIA AVČR, v.v.i, 2011 - (Janžura, M.; Ivánek, J.). s. 16-16 [7th International Workshop on Data–Algorithms–Decision Making. 27.11.2011-29.11.2011, Mariánská] R&D Projects: GA ČR 102/08/0567; GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords : estimation * distributed estimation * model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/AS/dedecius-decentralized distributed bayesian estimation.pdf

  9. Definition of distributed generation

    Today, power systems in many countries are restructured and deregulated, and it is expected that small-scale generators, or Distributed Generation (DG) units, such as microturbines, PV, fuel cells and wind turbines, will play an important role in the power system, so DG needs to be well defined and described. Generally, DG units are defined as electric power generators that are connected to distribution networks or on the customer side of the network. This paper presents a definition of DG that considers some important aspects, and then introduces some different types of DG.

  10. New parton distributions

    Martin, A.D.; Stirling, W.J. (Dept. of Physics, Univ. Durham (United Kingdom)); Roberts, R.G. (Rutherford Appleton Lab., Chilton (United Kingdom))

    1992-12-01

    Recent measurements of the structure functions for deep inelastic scattering made by the NMC and the CCFR collaborations provide new information on parton distributions, particularly in the interval 0.01 < x < 0.1. We describe (next-to-leading order) determinations of parton distributions which incorporate these new data together with other deep-inelastic and related data, and which allow for the possibility of flavour dependence of the light quark sea (i.e. $\bar u \neq \bar d$) as implied by the Gottfried sum rule. We discuss extrapolations to small x. (orig.).
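
    For reference, the Gottfried sum rule mentioned in the record can be written, in the leading-order parton model, as follows (a standard textbook statement added for context, not text from the original record):

```latex
% Gottfried sum: a light-sea asymmetry \bar{u} \neq \bar{d} shifts S_G away from 1/3.
S_G \;=\; \int_0^1 \frac{dx}{x}\left[F_2^p(x) - F_2^n(x)\right]
    \;=\; \frac{1}{3} \;+\; \frac{2}{3}\int_0^1 dx\,\left[\bar{u}(x) - \bar{d}(x)\right]
```

    A measured value below 1/3, as reported by NMC, therefore points to $\bar d > \bar u$ in the proton sea, which is precisely the flavour dependence the fits above allow for.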

  11. The Signal Distribution System

    Belohrad, D; CERN. Geneva. AB Department

    2005-01-01

    For the purpose of LHC signal observation and high-frequency signal distribution, the Signal Distribution System (SDS) was built. The SDS can contain up to 5 switching elements, each of which allows the user to select one of up to 8 bi-directional signals. Coaxial relays are used to switch the signals. Depending on the coaxial relay type used, the transfer bandwidth can reach 18 GHz. The SDS is controllable via TCP/IP, via the parallel port, or locally by a rotary switch.

  12. Theory of distributions

    Georgiev, Svetlin G

    2015-01-01

    This book explains many fundamental ideas in the theory of distributions. The theory of partial differential equations is one of the synthetic branches of analysis that combines ideas and methods from different fields of mathematics, ranging from functional analysis and harmonic analysis to differential geometry and topology. This presents specific difficulties to those studying the field. This book, which consists of 10 chapters, is suitable for upper undergraduate/graduate students and mathematicians seeking an accessible introduction to some aspects of the theory of distributions. It can also be used for a one-semester course.

  13. Distributed Control Diffusion

    Schultz, Ulrik Pagh

    2007-01-01

    Programming a modular, self-reconfigurable robot is, however, a complicated task: the robot is essentially a real-time, distributed embedded system, where control and communication paths often are tightly coupled to the current physical configuration of the robot. To facilitate the task of programming modular ... The prototype relies on a simple virtual machine with a dedicated instruction set, allowing mobile programs to migrate between the modules that constitute a robot. Through a number of simulated experiments, we show how a single rule-based controller program implemented using distributed control diffusion can...

  14. High voltage distributed amplifier

    Willems, D.; Bahl, I.; Wirsing, K.

    1991-12-01

    A high-voltage distributed amplifier implemented in GaAs MMIC technology has demonstrated good circuit performance over at least two octave bandwidth. This technique allows for very broadband amplifier operation with good efficiency in satellite, active-aperture radar, and battery-powered systems. Also, by increasing the number of FETs, the amplifier can be designed to match different voltage rails. The circuit does require a small amount of additional chip size over conventional distributed amplifiers but does not require power dividers or additional matching networks. This circuit configuration should find great use in broadband power amplifier design.

  15. THERMAL DISTRIBUTION SYSTEM EXPERIMENT

    KRAJEWSKI,R.F.; ANDREWS,J.W.; WEI,G.

    1999-09-01

    A laboratory experiment has been conducted which tests for the effects of distribution system purging on system Delivery Effectiveness (DE) as defined in ASHRAE 152P. The experiment is described in its configuration, instrumentation, and data acquisition system. Data gathered in the experiment is given and discussed. The results show that purging of the distribution system alone does not offer any improvement of the system DE. Additional supporting tests were conducted regarding experimental simulations of buffer zones and bare pipe and are also discussed.

  16. Distributed User Interfaces

    Gallud, Jose A; Penichet, Victor M R

    2011-01-01

    The recent advances in display technologies and mobile devices is having an important effect on the way users interact with all kinds of devices (computers, mobile devices, laptops, tablets, and so on). These are opening up new possibilities for interaction, including the distribution of the UI (User Interface) amongst different devices, and implies that the UI can be split and composed, moved, copied or cloned among devices running the same or different operating systems. These new ways of manipulating the UI are considered under the emerging topic of Distributed User Interfaces (DUIs). DUIs

  17. FMCG companies specific distribution channels

    Ioana Barin

    2009-01-01

    Distribution includes all activities undertaken by the producer, alone or in cooperation, from the completion of the final finished products or services until they are in the possession of consumers. Distribution consists of the following major components: distribution channels or marketing channels, which together form a distribution network, and logistics or physical distribution. In order to be achieved effectively, distribution of goods requires a number of activities and operational processes related t...

  18. Factor Determining Income Distribution

    J. Tinbergen (Jan)

    1972-01-01

    Since the phrase income distribution covers a large number of different concepts, it is necessary to define these and to indicate the choice made in this article. Income for a given recipient may cover lists of items which are not always the same. Apart from popular misunderstandings abo

  19. Asynchronous Distributed Searchlight Scheduling

    Obermeyer, Karl J; Bullo, Francesco

    2011-01-01

    This paper develops and compares two simple asynchronous distributed searchlight scheduling algorithms for multiple robotic agents in nonconvex polygonal environments. A searchlight is a ray emitted by an agent which cannot penetrate the boundary of the environment. A point is detected by a searchlight if and only if the point is on the ray at some instant. Targets are points which can move continuously with unbounded speed. The objective of the proposed algorithms is for the agents to coordinate the slewing (rotation about a point) of their searchlights in a distributed manner, i.e., using only local sensing and limited communication, such that any target will necessarily be detected in finite time. The first algorithm we develop, called DOWSS (Distributed One Way Sweep Strategy), is a distributed version of a known algorithm described originally in 1990 by Sugihara et al., but it can be very slow in clearing the entire environment because only one searchlight may slew at a time. In an ...

  20. Enabling distributed petascale science

    Petascale science is an end-to-end endeavour, involving not only the creation of massive datasets at supercomputers or experimental facilities, but also the subsequent analysis of that data by a user community that may be distributed across many laboratories and universities. The new SciDAC Center for Enabling Distributed Petascale Science (CEDPS) is developing tools to support this end-to-end process. These tools include data placement services for the reliable, high-performance, secure, and policy-driven placement of data within a distributed science environment; tools and techniques for the construction, operation, and provisioning of scalable science services; and tools for the detection and diagnosis of failures in end-to-end data placement and distributed application hosting configurations. In each area, we build on a strong base of existing technology and have made useful progress in the first year of the project. For example, we have recently achieved order-of-magnitude improvements in transfer times (for lots of small files) and implemented asynchronous data staging capabilities; demonstrated dynamic deployment of complex application stacks for the STAR experiment; and designed and deployed end-to-end troubleshooting services. We look forward to working with SciDAC application and technology projects to realize the promise of petascale science

  1. Industrial power distribution

    Sorrells, M.A.

    1990-01-01

    This paper is a broad overview of industrial power distribution. Primary focus will be on selection of the various low voltage components to achieve the end product. Emphasis will be on the use of national standards to ensure a safe and well designed installation.

  2. A MULTIVARIATE WEIBULL DISTRIBUTION

    Cheng Lee

    2010-07-01

    Full Text Available A multivariate survival function of Weibull Distribution is developed by expanding the theorem by Lu and Bhattacharyya. From the survival function, the probability density function, the cumulative probability function, the determinant of the Jacobian Matrix, and the general moment are derived.
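
    As a point of reference (added here, not quoted from the record), the univariate Weibull survival function is shown below, together with one bivariate survival function of the Lu-Bhattacharyya type that such constructions generalize; the bivariate form is stated from memory and should be checked against the cited theorem.

```latex
% Univariate Weibull survival function with shape k and scale \lambda:
\bar{F}(x) = \exp\!\left[-\left(x/\lambda\right)^{k}\right], \qquad x \ge 0

% A bivariate Weibull survival function of the Lu--Bhattacharyya type,
% with dependence parameter 0 < \delta \le 1 (illustrative form):
\bar{F}(x,y) = \exp\!\left\{-\left[\left(x/\lambda_1\right)^{k_1/\delta}
              + \left(y/\lambda_2\right)^{k_2/\delta}\right]^{\delta}\right\},
              \qquad x, y \ge 0
```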

  3. Quantum statistical parton distributions

    Modified Fermi-Dirac functions for the fermionic partons and a Bose-Einstein expression for gluons allow us to successfully describe both polarized and unpolarized structure functions in terms of a small number of parameters. Definite predictions are made for q-bar distributions to be tested in forthcoming experiments. (author)

  4. Quantum statistical parton distributions

    Buccella, Franco [Dipartimento di Scienze Fisiche, Universita di Napoli Federico II, INFN, Sezione di Napoli (Italy)

    2002-10-11

    Modified Fermi-Dirac functions for the fermionic partons and a Bose-Einstein expression for gluons allow us to successfully describe both polarized and unpolarized structure functions in terms of a small number of parameters. Definite predictions are made for q-bar distributions to be tested in forthcoming experiments. (author)

  5. Parton distributions updated

    Martin, A.D.; Stirling, W.J. [Durham Univ. (United Kingdom). Dept. of Physics; Roberts, R.G.

    1992-11-01

    We refine our recent determination of parton distributions with the inclusion of the new published sets of precise muon and neutrino deep inelastic data. Deuteron screening effects are incorporated. The tt-bar cross section at the Fermi National Accelerator Laboratory (FNAL) pp-bar collider is calculated. (author).

  6. Power distribution arrangement

    2010-01-01

    An arrangement and a method for distributing power supplied by a power source to two or more loads (e.g., electrical vehicular systems) is disclosed, where a representation of the power taken from the source by a particular one of the loads is measured. The measured representation of the amount...

  7. A Distributed Tier-1

    Fischer, Lars; Grønager, Michael; Kleist, Josva;

    2008-01-01

    organization but instead is a meta-center built of resources under the control of a number of different national organizations. We present some technical implications of these aspects as well as the high-level design of this distributed Tier-1. The focus will be on computing services, storage and monitoring....

  8. Distributed analysis in ATLAS

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.

  9. Distributed analysis in ATLAS

    Legger, Federica; The ATLAS collaboration

    2015-01-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data for the distributed physics community is a challenging task. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are daily running on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We r...

  10. The proton's gluon distribution

    Donnachie, A.; Landshoff, P.V.(DAMTP, Cambridge University, United Kingdom)

    2002-01-01

    The gluon distribution is dominated by the hard pomeron at small $x$ and all $Q^2$, with no soft-pomeron contribution. This describes well not only the DGLAP evolution of the hard-pomeron part of $F_2(x,Q^2)$, but also charm photoproduction and electroproduction, and the longitudinal structure function, all calculated in leading-order pQCD.

  11. Distributed Parameter Modelling Applications

    Sales-Cruz, Mauricio; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications for both steady state and dynamic applications are given. These relate to the processing of oil shale, the granulation of industrial fertilizers and the development of a short-path evaporator. The oil shale processing problem illustrates the interplay amongst particle flows in rotating drums, heat and mass transfer between solid and gas phases. The industrial application considers the dynamics of an Alberta-Taciuk processor, commonly used in shale oil and oil ... the steady state, distributed behaviour of a short-path evaporator.

  12. Characterizations of Exponentiated Distributions

    gholamhossein g hamedani

    2013-01-01

    Various characterizations of the class of exponentiated distributions are presented. These characterizations are based on a simple relationship between two truncated moments and on functions of the nth order statistic. The results are applied to certain well-known members of this class.
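
    For context, the defining relation of an exponentiated distribution is standard: a baseline cumulative distribution function F is raised to a positive power. This definition is added here for orientation and is not taken from the article.

```latex
% Exponentiated family generated from a baseline cdf F, with parameter \alpha > 0:
G(x) = \left[F(x)\right]^{\alpha},
\qquad
g(x) = \alpha \left[F(x)\right]^{\alpha - 1} f(x), \quad f = F'
```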

  13. Multiagent distributed watershed management

    Giuliani, M.; Castelletti, A.; Amigoni, F.; Cai, X.

    2012-04-01

    Deregulation and democratization of water along with increasing environmental awareness are challenging integrated water resources planning and management worldwide. The traditional centralized approach to water management, as described in much of the water resources literature, is often unfeasible in most modern social and institutional contexts. Thus it should be reconsidered from a more realistic and distributed perspective, in order to account for the presence of multiple and often independent Decision Makers (DMs) and many conflicting stakeholders. Game-theory-based approaches are often used to study these situations of conflict (Madani, 2010), but they are limited to a descriptive perspective. Multiagent systems (see Wooldridge, 2009), instead, seem to be a more suitable paradigm because they naturally allow the representation of a set of self-interested agents (DMs and/or stakeholders) acting in a distributed decision process at the agent level, resulting in a promising compromise alternative between the ideal centralized solution and the actual uncoordinated practices. Casting a water management problem in a multiagent framework allows one to exploit the techniques and methods that are already available in this field for solving distributed optimization problems. In particular, in Distributed Constraint Satisfaction Problems (DCSP, see Yokoo et al., 2000), each agent controls some variables according to its own utility function but has to satisfy inter-agent constraints; while in Distributed Constraint Optimization Problems (DCOP, see Modi et al., 2005), the problem is generalized by introducing a global objective function to be optimized, which requires a coordination mechanism between the agents. In this work, we apply a DCSP-DCOP based approach to model a hypothetical steady-state watershed management problem (Yang et al., 2009), involving several active human agents (i.e. agents who make decisions) and reactive ecological agents (i.e. agents representing
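
    To make the DCSP/DCOP distinction concrete, a deliberately tiny sketch follows. All agents, choices and utilities are invented for illustration and are unrelated to the watershed model of the abstract: two agents each pick a withdrawal, a shared constraint caps the total (the DCSP view), and a global objective over feasible joint choices is maximised by brute-force enumeration (the DCOP view); real DCOP algorithms achieve this without a central enumerator.

```python
# Toy DCSP/DCOP: two agents pick water withdrawals subject to a shared capacity
# constraint, and a global objective (the sum of local utilities) is maximised
# by enumerating feasible joint assignments. All values are invented.
from itertools import product

CHOICES = {"farm": [0, 10, 20, 30], "city": [0, 10, 20, 30]}  # decision variables
CAPACITY = 40  # inter-agent constraint: total withdrawal must not exceed this

def farm_utility(x):
    return 8 * x - 0.1 * x * x   # local utility of the farming agent

def city_utility(x):
    return 6 * x                 # local utility of the municipal agent

def feasible(assignment):
    """DCSP view: does the joint assignment satisfy the shared constraint?"""
    return sum(assignment.values()) <= CAPACITY

def solve_dcop():
    """DCOP view: among feasible assignments, maximise the global objective."""
    best, best_value = None, float("-inf")
    for farm_x, city_x in product(CHOICES["farm"], CHOICES["city"]):
        assignment = {"farm": farm_x, "city": city_x}
        if not feasible(assignment):
            continue
        value = farm_utility(farm_x) + city_utility(city_x)
        if value > best_value:
            best, best_value = assignment, value
    return best, best_value

if __name__ == "__main__":
    print(solve_dcop())   # prints the best feasible joint withdrawal and its value
```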

  14. A distribution management system

    Jaerventausta, P.; Verho, P.; Kaerenlampi, M.; Pitkaenen, M. [Tampere Univ. of Technology (Finland); Partanen, J. [Lappeenranta Univ. of Technology (Finland)

    1998-08-01

    The development of new distribution automation applications is considerably wide nowadays. One of the most interesting areas is the development of a distribution management system (DMS) as an expansion of the traditional SCADA system. At the power transmission level such a system is called an energy management system (EMS). The idea of these expansions is to provide supporting tools for control center operators in system analysis and operation planning. Nowadays the SCADA is the main computer system (and often the only one) in the control center. However, the information displayed by the SCADA is often inadequate, and several tasks cannot be solved by a conventional SCADA system. The need for new computer applications in the control center arises from the insufficiency of the SCADA and from some other trends, chief among them the increasing overall importance of the distribution networks. The slowing down of load growth has often made network reinforcements unprofitable, so the existing network must be operated more efficiently. At the same time, larger distribution areas are, for economic reasons, being monitored at one control center while the size of the operation staff is decreasing. Quality-of-supply requirements are also becoming stricter. The data needed for the new applications are mainly available in existing systems, so the computer systems of utilities must be integrated. The main data sources for the new applications in the control center are the AM/FM/GIS (i.e. the network database system), the SCADA, and the customer information system (CIS). The new functions can be embedded in an existing computer system, but this means a strong dependency on the vendor of that system. An alternative strategy is to develop an independent system which is integrated with the other computer systems using well-defined interfaces. The latter approach makes it possible to use the new applications in various computer environments, having only a weak dependency on the

  15. Summer Steelhead Distribution [ds341

    California Department of Resources — Summer Steelhead Distribution October 2009 Version This dataset depicts observation-based stream-level geographic distribution of anadromous summer-run steelhead...

  16. www.p2p.edu: Rip, Mix & Burn Your Education.

    Gillespie, Thom

    2001-01-01

    Discusses peer to peer technology which allows uploading files from one hard drive to another. Topics include the client/server model for education; the Napster client/server model; Gnutella; Freenet and other projects to allow the free exchange of information without censorship; bandwidth problems; copyright issues; metadata; and the United…

  17. Computer control system of the cooler-synchrotron TARN-II

    The client-server model enables us to develop a flexible control system such as the TARN-II computer control system. The system forms a single logical machine, with a message bus used for communication between its components. An auxiliary control path in the client-server model serves high-speed device control. The configuration and performance of that control system are described. (author)

  18. Distribution management system

    Verho, P.; Kaerenlampi, M.; Pitkaenen, M.; Jaerventausta, P.; Partanen, J.

    1997-12-31

    This report comprises a general description of the results obtained in the research projects "Information system applications of a distribution control center", "Event analysis in primary substation", and "Distribution management system" of the EDISON research program during the years 1993-1997. The different domains of the project are presented in more detail in other reports. An operational state analysis of a distribution network has been made from the control center point of view and the functions which cannot be solved by a conventional SCADA system are determined. The basis for new computer applications is shown to be integration of the computer systems. The main result of the work is a distribution management system (DMS), which is an autonomous system integrated with the existing information systems, SCADA and AM/FM/GIS. The system uses a large number of modelling and computation methods and provides an extensive group of advanced functions to support distribution network monitoring, fault management, operations planning and optimization. The development platform of the system consists of a Visual C++ programming environment, Windows NT operating system and PC. During the development the DMS has been tested in a pilot utility and it is nowadays in practical use in several Finnish utilities. The use of a DMS improves the quality and economy of power supply in many ways; the outage times can, in particular, be reduced using the system. Based on the achieved experiences some parts of the DMS reached the commercialization phase, too. Initially the commercial products were developed by a software company, Versoft Oy. At present the research results are the basis of a worldwide software product supplied by ABB Transmit Co. (orig.) EDISON Research Programme. 28 refs.

  19. Modeling of magnitude distributions by the generalized truncated exponential distribution

    Raschke, Mathias

    2014-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cut-off exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except for the upper bound magnitude are mixed, the resulting distribution is not a TED. Conversely, it is also not possible to split a TED of a seismic region into TEDs of sub-regions with equal parameters except for the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. We overcome it by generalizing the above-mentioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cut-off points. This distribution model is fle...
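
    For orientation, the doubly truncated exponential magnitude distribution (TED) used in seismic hazard analysis has the standard Gutenberg-Richter-based form below (stated here for context, not quoted from the paper):

```latex
% Truncated exponential magnitude distribution on [m_min, m_max] with rate \beta:
F(m) \;=\; \frac{1 - \exp\left[-\beta\,(m - m_{\min})\right]}
                {1 - \exp\left[-\beta\,(m_{\max} - m_{\min})\right]},
\qquad m_{\min} \le m \le m_{\max}
```

    The weakness described in the abstract is that a mixture of two such distributions differing only in $m_{\max}$ is no longer of this form, which is what the GTED construction is designed to repair.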

  20. Frequency distribution of coliforms in water distribution systems.

    Christian, R R; Pipes, W O

    1983-01-01

    Nine small water distribution systems were sampled intensively to determine the patterns of dispersion of coliforms. The frequency distributions of confirmed coliform counts were compatible with either the negative-binomial or the lognormal distribution. They were not compatible with either the Poisson or Poisson-plus-added zeroes distribution. The implications of the use of the lognormal distributional model were further evaluated because of its previous use in water quality studies. The geo...

  1. Light Meson Distribution Amplitudes

    Arthur, R; Brommel, D; Donnellan, M A; Flynn, J M; Juttner, A; de Lima, H Pedroso; Rae, T D; Sachrajda, C T; Samways, B

    2010-01-01

    We calculated the first two moments of the light-cone distribution amplitudes for the pseudoscalar mesons ($\\pi$ and $K$) and the longitudinally polarised vector mesons ($\\rho$, $K^*$ and $\\phi$) as part of the UKQCD and RBC collaborations' $N_f=2+1$ domain-wall fermion phenomenology programme. These quantities were obtained with a good precision and, in particular, the expected effects of $SU(3)$-flavour symmetry breaking were observed. Operators were renormalised non-perturbatively and extrapolations to the physical point were made, guided by leading order chiral perturbation theory. The main results presented are for two volumes, $16^3\\times 32$ and $24^3\\times 64$, with a common lattice spacing. Preliminary results for a lattice with a finer lattice spacing, $32^3\\times64$, are discussed and a first look is taken at the use of twisted boundary conditions to extract distribution amplitudes.
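
    The moments referred to in the abstract are conventionally defined as follows (a standard definition added for context):

```latex
% Moments of a light-cone distribution amplitude \phi(\xi), where \xi = 2u - 1 is
% the difference of the quark and antiquark longitudinal momentum fractions:
\langle \xi^{n} \rangle \;=\; \int_{-1}^{1} d\xi\, \xi^{n}\, \phi(\xi),
\qquad \int_{-1}^{1} d\xi\, \phi(\xi) = 1
```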

  2. Agile & Distributed Project Management

    Pries-Heje, Jan; Pries-Heje, Lene

    2011-01-01

    Scrum has gained surprising momentum as an agile IS project management approach. An obvious question is why Scrum is so useful? To answer that question we carried out a longitudinal study of a distributed project using Scrum. We analyzed the data using coding and categorisation and three carefully selected theoretical frameworks. Our conclusion in this paper is that Scrum is so useful because it provides effective communication in the form of boundary objects and boundary spanners, it provides effective social integration by building up social team capital, and it provides much needed control and coordination mechanisms by allowing both local and global articulation of work in the project. That is why Scrum is especially useful for distributed IS project management and teamwork.

  3. Distribution load estimation (DLE)

    Seppaelae, A.; Lehtonen, M. [VTT Energy, Espoo (Finland)

    1998-08-01

    The load research has produced customer class load models to convert the customers' annual energy consumption to hourly load values. The reliability of load models applied from a nation-wide sample is limited in any specific network because many local circumstances are different from utility to utility and from time to time. Therefore there is a need to find improvements to the load models or, in general, improvements to the load estimates. In Distribution Load Estimation (DLE) the measurements from the network are utilized to improve the customer class load models. The results of DLE will be new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented
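
    As a highly simplified illustration of the idea (not the estimation method actually used in the project; class names and numbers are invented), the sketch below rescales customer-class load models so that their sum reproduces an hourly feeder measurement taken from the SCADA system.

```python
# Simplified distribution load estimation sketch: proportionally adjust
# customer-class load models so that their sum matches a measured hourly
# feeder load taken from SCADA. Class names and values are invented.

model_loads = {"residential": 420.0, "commercial": 310.0, "agriculture": 95.0}  # kW, from load models
measured_feeder_load = 760.0  # kW, hourly value recorded by the SCADA system

def adjust_to_measurement(models, measurement):
    """Scale each class load by a common factor so the total matches SCADA."""
    factor = measurement / sum(models.values())
    return {cls: load * factor for cls, load in models.items()}

adjusted = adjust_to_measurement(model_loads, measured_feeder_load)
print({cls: round(load, 1) for cls, load in adjusted.items()})
# The adjusted loads stay close to the original models but now sum to the
# measured network load, which is the stated goal of DLE.
```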

  4. DIRAC distributed computing services

    The DIRAC Project provides a general-purpose framework for building distributed computing systems. It is now used in several HEP and astrophysics experiments as well as by user communities in other scientific domains. There is large interest from smaller user communities in having a simple tool like DIRAC for accessing grid and other types of distributed computing resources. However, small experiments cannot afford to install and maintain dedicated services. Therefore, several grid infrastructure projects are providing DIRAC services for their respective user communities. These services are used for user tutorials as well as to help port applications to the grid for practical day-to-day work. The services typically give access to several grid infrastructures as well as to standalone computing clusters accessible by the target user communities. In the paper we will present the experience of running DIRAC services provided by the France-Grilles NGI and other national grid infrastructure projects.

  5. On Distribution Preserving Quantization

    Li, Minyue; Kleijn, W Bastiaan

    2011-01-01

    Upon compressing perceptually relevant signals, conventional quantization generally results in unnatural outcomes at low rates. We propose distribution preserving quantization (DPQ) to solve this problem. DPQ is a new quantization concept that confines the probability space of the reconstruction to be identical to that of the source. A distinctive feature of DPQ is that it facilitates a seamless transition between signal synthesis and quantization. A theoretical analysis of DPQ leads to a distribution preserving rate-distortion function (DP-RDF), which serves as a lower bound on the rate of any DPQ scheme, under a constraint on distortion. In general situations, the DP-RDF approaches the classic rate-distortion function for the same source and distortion measure, in the limit of an increasing rate. A practical DPQ scheme based on a multivariate transformation is also proposed. This scheme asymptotically achieves the DP-RDF for i.i.d. Gaussian sources and the mean squared error.

  6. Distributed multinomial regression

    Taddy, Matt

    2015-01-01

    This article introduces a model-based approach to distributed computing for multinomial logistic (softmax) regression. We treat counts for each response category as independent Poisson regressions via plug-in estimates for fixed effects shared across categories. The work is driven by the high-dimensional-response multinomial models that are used in analysis of a large number of random counts. Our motivating applications are in text analysis, where documents are tokenized and the token counts ...
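
    A minimal sketch of the scheme described above, assuming the statsmodels package is available: each response category gets its own Poisson regression with a shared plug-in offset (here the log of each observation's total count), so the per-category fits are independent and can be farmed out to parallel workers. The data and dimensions are invented.

```python
# Distributed multinomial regression sketch: one independent Poisson regression
# per response category, sharing a plug-in offset (log of each row's total),
# so the fits can run in parallel. Data and sizes are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_obs, n_features, n_categories = 200, 3, 5

X = sm.add_constant(rng.normal(size=(n_obs, n_features)))   # covariates + intercept
counts = rng.poisson(lam=2.0, size=(n_obs, n_categories))   # e.g. token counts
offset = np.log(counts.sum(axis=1) + 1.0)                   # plug-in shared effect

def fit_category(j):
    """Poisson regression for category j; independent of every other category."""
    model = sm.GLM(counts[:, j], X, family=sm.families.Poisson(), offset=offset)
    return model.fit().params

# In a real deployment these fits would be mapped over many workers or machines;
# here a plain loop stands in for that parallel map.
coefficients = np.column_stack([fit_category(j) for j in range(n_categories)])
print(coefficients.shape)   # (n_features + 1, n_categories)
```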

  7. Contracts in distributed systems

    Massimo Bartoletti; Emilio Tuosto; Roberto Zunino

    2011-01-01

    We present a parametric calculus for contract-based computing in distributed systems. By abstracting from the actual contract language, our calculus generalises both the contracts-as-processes and contracts-as-formulae paradigms. The calculus features primitives for advertising contracts, for reaching agreements, and for querying the fulfilment of contracts. Coordination among principals happens via multi-party sessions, which are created once agreements are reached. We present two instances ...

  8. Liquidity, Welfare and Distribution

    Martín Gil Samuel

    2012-01-01

    This work presents a dynamic general equilibrium model where wealth distribution is endogenous. I provide channels of causality that suggest a complex relationship between financial markets and the real activity which breaks down the classical dichotomy. As a consequence, the Friedman rule does not hold. In terms of the current events taking place in the world economy, this paper provides a rationale to advert against the perils of an economy satiated with liquidity. Efficiency and distribution cannot thus be considered as separate attributes once we account for the interactions between financial markets and the economic performance.

  9. Optimal distributed dynamic advertising

    Marinelli, Carlo; Savin, Sergei

    2004-01-01

    We propose a novel approach to modeling advertising dynamics for a firm operating over a distributed market domain, based on controlled partial differential equations of diffusion type. Using our model, we consider a general type of finite-horizon profit maximization problem in a monopoly setting. By reformulating this profit maximization problem as an optimal control problem in infinite dimensions, we derive sufficient conditions for the existence of its optimal solutions under general profit f...

  10. Fission fragment angular distributions

    Recently a Letter appeared (Phys. Rev. Lett., 522, 414 (1984)) claiming that the usual expression for describing the angular distribution of fission fragments from compound nuclear decay is not a necessarily valid limit of a more general expression. In this comment we wish to point out that the two expressions arise from distinctly different models, and that the new expression as used in the cited reference is internally inconsistent

  11. Distributed Authorization in Vanadium

    Taly, Ankur; Shankar, Asim

    2016-01-01

    In this tutorial, we present an authorization model for distributed systems that operate with limited internet connectivity. Reliable internet access remains a luxury for a majority of the world's population. Even for those who can afford it, a dependence on internet connectivity may lead to sub-optimal user experiences. With a focus on decentralized deployment, we present an authorization model that is suitable for scenarios where devices right next to each other (such as a sensor or a frien...

  12. Air Distribution in Rooms

    Nielsen, Peter V.

    The research on air distribution in rooms is often done as full-size investigations, scale-model investigations or by Computational Fluid Dynamics (CFD). New activities have taken place within all three areas and this paper draws comparisons between the different methods. The outcome of the IEA-sponsored research "Air Flow Pattern within Buildings" is used for comparisons in some parts of the paper because various types of experiments and many countries are involved.

  13. Learning transformed product distributions

    Daskalakis, Constantinos; Diakonikolas, Ilias; Servedio, Rocco A.

    2011-01-01

    We consider the problem of learning an unknown product distribution $X$ over $\{0,1\}^n$ using samples $f(X)$ where $f$ is a known transformation function. Each choice of a transformation function $f$ specifies a learning problem in this framework. Information-theoretic arguments show that for every transformation function $f$ the corresponding learning problem can be solved to accuracy $\epsilon$, using $\tilde{O}(n/\epsilon^2)$ examples, by a generic algorithm whose running time may be expon...

  14. Electric power distribution reliability

    Brown, Richard E

    2002-01-01

    Balancing theory, practical knowledge, and real-world applications, this reference consolidates all pertinent topics related to power distribution reliability into one comprehensive volume. Exploring pressing issues in creating and analyzing reliability models, the author highlights the most effective techniques to achieve maximum performance at lowest cost. With over 300 tables, figures, and equations, the book discusses service interruptions caused by equipment malfunction, animals, trees, severe weather, natural disasters, and human error and evaluates strategies to improve reliability and

  15. Quantum Message Distribution

    LUO Ming-Xing; CHEN Xiu-Bo; DENG Yun; Yang Yi-Xian

    2013-01-01

    Semiquantum techniques have been explored recently to bridge classical communications and quantum communications. In this paper, we present a scheme to distribute messages from one quantum participant to one weak quantum participant who can only measure quantum states. It is proved to be robust by combining classical coding encryption, quantum coding and other quantum techniques.

  16. The generalized distributive law

    Aji, Srinivas M.; McEliece, Robert J.

    2000-01-01

    We discuss a general message passing algorithm, which we call the generalized distributive law (GDL). The GDL is a synthesis of the work of many authors in information theory, digital communications, signal processing, statistics, and artificial intelligence. It includes as special cases the Baum-Welch algorithm, the fast Fourier transform (FFT) on any finite Abelian group, the Gallager-Tanner-Wiberg decoding algorithm, Viterbi's algorithm, the BCJR algorithm, Pearl's “belief propagation” alg...
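
    The reorganisation that the GDL generalises can be shown in a few lines (a toy example with invented factor tables, not the algorithm of the paper): marginalising a product of local functions by pushing the sums inward instead of expanding the full joint table.

```python
# Generalized distributive law in miniature: marginalise a product of local
# factors by pushing sums inward (sum-product) instead of expanding the joint.
# The factor tables are invented; only the reorganisation of work matters.
import itertools

W = X = Y = Z = range(4)
f = {(w, x): (w + 1) * (x + 2) for w, x in itertools.product(W, X)}
g = {(x, y): x + y + 1 for x, y in itertools.product(X, Y)}
k = {(y, z): (y + 1) + z for y, z in itertools.product(Y, Z)}

# Naive: expand the full joint over (w, x, y, z), then sum out x and y.
naive = {
    (w, z): sum(f[w, x] * g[x, y] * k[y, z] for x, y in itertools.product(X, Y))
    for w, z in itertools.product(W, Z)
}

# GDL / sum-product: eliminate y first, then x, reusing the intermediate table.
gk = {(x, z): sum(g[x, y] * k[y, z] for y in Y) for x, z in itertools.product(X, Z)}
fast = {(w, z): sum(f[w, x] * gk[x, z] for x in X) for w, z in itertools.product(W, Z)}

assert naive == fast   # same answers, but roughly |X|*|Y| vs |X|+|Y| work per entry
print(fast[(0, 0)])
```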

  17. Distributed Computing Economics

    Gray, Jim

    2004-01-01

    Computing economics are changing. Today there is rough price parity between (1) one database access, (2) ten bytes of network traffic, (3) 100,000 instructions, (4) 10 bytes of disk storage, and (5) a megabyte of disk bandwidth. This has implications for how one structures Internet-scale distributed computing: one puts computing as close to the data as possible in order to avoid expensive network traffic.
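
    Taking the listed parities at face value, a back-of-the-envelope derivation (ours, not a quotation from the paper) gives the break-even point behind that conclusion:

```latex
% 10 bytes of network traffic \approx 100{,}000 instructions, hence
\frac{100{,}000\ \text{instructions}}{10\ \text{bytes}}
   \;=\; 10^{4}\ \text{instructions per byte}
```

    So shipping a byte to a remote CPU pays off only if that byte triggers on the order of ten thousand instructions of computation or more; otherwise it is cheaper to move the computation to the data, which is the structuring rule stated above.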

  18. Protection in Distributed Generation

    Comech, M.Paz; Garcia-Gracia, Miguel; Borroy, Samuel; Villen, M.Teresa

    2010-01-01

    The expected large penetration of dispersed generators in distribution systems can lead to conflicts with the current protection schemes, since these were designed to work in a different scenario and under different conditions. Most of the problems described in this chapter would be solved with the implementation of communication schemes and by providing the power system with some "intelligence". In this line, the standard IEC 61850 and new developments like Smart Grids are expected to bring new...

  19. Distributed Priority Synthesis

    Harald Ruess

    2012-11-01

    Full Text Available Given a set of interacting components with non-deterministic variable updates and given safety requirements, the goal of priority synthesis is to restrict, by means of priorities, the set of possible interactions in such a way as to guarantee the given safety conditions for all possible runs. In distributed priority synthesis we are interested in obtaining local sets of priorities, which are deployed in terms of local component controllers sharing intended next moves between components in local neighborhoods only. These possible communication paths between local controllers are specified by means of a communication architecture. We formally define the problem of distributed priority synthesis in terms of a multi-player safety game between players for (angelically) selecting the next transition of the components and an environment for (demonically) updating uncontrollable variables. We analyze the complexity of the problem, and propose several optimizations including a solution-space exploration based on a diagnosis method using a nested extension of the usual attractor computation in games, together with a reduction to corresponding SAT problems. When diagnosis fails, the method proposes potential candidates to guide the exploration. These optimized algorithms for solving distributed priority synthesis problems have been integrated into the VissBIP framework. An experimental validation of this implementation is performed using a range of case studies including scheduling in multicore processors and modular robotics.

  20. A distribution management system

    Verho, P.; Jaerventausta, P.; Kaerenlampi, M.; Paulasaari, H. [Tampere Univ. of Technology (Finland); Partanen, J. [Lappeenranta Univ. of Technology (Finland)

    1996-12-31

    The development of new distribution automation applications is considerably wide nowadays. One of the most interesting areas is the development of a distribution management system (DMS) as an expansion of the traditional SCADA system. At the power transmission level such a system is called an energy management system (EMS). The idea of these expansions is to provide supporting tools for control center operators in system analysis and operation planning. The data needed for the new applications are mainly available in existing systems. Thus the computer systems of utilities must be integrated. The main data sources for the new applications in the control center are the AM/FM/GIS (i.e. the network database system), the SCADA, and the customer information system (CIS). The new functions can be embedded in an existing computer system, but this means a strong dependency on the vendor of that system. An alternative strategy is to develop an independent system which is integrated with the other computer systems using well-defined interfaces. The latter approach makes it possible to use the new applications in various computer environments, having only a weak dependency on the vendors of the other systems. In the research project this alternative is preferred and used in developing an independent distribution management system

  1. Multivariate Matrix-Exponential Distributions

    Bladt, Mogens; Nielsen, Bo Friis

    2010-01-01

    In this article we consider the distributions of non-negative random vectors with a joint rational Laplace transform, i.e., a fraction between two multi-dimensional polynomials. These distributions are in the univariate case known as matrix-exponential distributions, since their densities can be written as linear combinations of the elements in the exponential of a matrix. For this reason we shall refer to multivariate distributions with rational Laplace transform as multivariate matrix-exponential distributions (MVME). The marginal distributions of an MVME are univariate matrix-exponential distributions. We prove a characterization that states that a distribution is an MVME distribution if and only if all non-negative, non-null linear combinations of the coordinates have a univariate matrix-exponential distribution. This theorem is analogous to a well-known characterization theorem for the...
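
    For reference, the univariate matrix-exponential density alluded to above can be written in the standard form below (added for context):

```latex
% Univariate matrix-exponential density: a row vector \alpha, a square matrix S
% and a column vector s such that
f(x) \;=\; \boldsymbol{\alpha}\, e^{S x}\, \boldsymbol{s}, \qquad x \ge 0
```

    Expanding $e^{Sx}$ shows that $f$ is a linear combination of terms of the form $x^{j} e^{\lambda x}$, with $\lambda$ running over the eigenvalues of $S$, i.e. a linear combination of the elements of a matrix exponential, exactly as stated in the abstract.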

  2. GASIFICATION FOR DISTRIBUTED GENERATION

    Ronald C. Timpe; Michael D. Mann; Darren D. Schmidt

    2000-05-01

    A recent emphasis in gasification technology development has been directed toward reduced-scale gasifier systems for distributed generation at remote sites. The domestic distributed power generation market over the next decade is expected to be 5-6 gigawatts per year. The global increase is expected at 20 gigawatts over the next decade. The economics of gasification for distributed power generation are significantly improved when fuel transport is minimized. Until recently, gasification technology has been synonymous with coal conversion. Presently, however, interest centers on providing clean-burning fuel to remote sites that are not necessarily near coal supplies but have sufficient alternative carbonaceous material to feed a small gasifier. Gasifiers up to 50 MW are of current interest, with emphasis on those of 5-MW generating capacity. Internal combustion engines offer a more robust system for utilizing the fuel gas, while fuel cells and microturbines offer higher electric conversion efficiencies. The initial focus of this multiyear effort was on internal combustion engines and microturbines as more realistic near-term options for distributed generation. In this project, we studied emerging gasification technologies that can provide gas from regionally available feedstock as fuel to power generators under 30 MW in a distributed generation setting. Larger-scale gasification, primarily coal-fed, has been used commercially for more than 50 years to produce clean synthesis gas for the refining, chemical, and power industries. Commercial-scale gasification activities are under way at 113 sites in 22 countries in North and South America, Europe, Asia, Africa, and Australia, according to the Gasification Technologies Council. Gasification studies were carried out on alfalfa, black liquor (a high-sodium waste from the pulp industry), cow manure, and willow on the laboratory scale and on alfalfa, black liquor, and willow on the bench scale. Initial parametric tests

  3. THE DISTRIBUTIONAL DIMENSION OF FRACTALS

    2007-01-01

    In the book [1] H. Triebel introduces the distributional dimension of fractals and relates it to the Hausdorff dimension; thus we might say that the distributional dimension is an analytical definition of the Hausdorff dimension. Therefore we can study the Hausdorff dimension through the distributional dimension analytically. By discussing the distributional dimension, this paper intends to set up a criterion for estimating the upper and lower bounds of the Hausdorff dimension analytically. Examples illustrating the criterion are included at the end.

  4. A new Markov Binomial distribution.

    Omey, Edward; Minkova, Leda D.

    2011-01-01

    In this paper, we introduce a two-state homogeneous Markov chain and define a geometric distribution related to this Markov chain. We also define a negative binomial distribution, similar to the classical case, and call it the NB distribution related to the interrupted Markov chain. The new binomial distribution is likewise related to the interrupted Markov chain. Some characterization properties of the geometric distributions are given. Recursion formulas and probability mass functions for the NB distribution and the new...

  5. Optimizing queries in distributed systems

    Ion LUNGU

    2006-01-01

    Full Text Available This research presents the main elements of query optimization in distributed systems. First, the data architecture is presented in accordance with the system-level architecture of a distributed environment. Then the architecture of a distributed database management system (DDBMS) is described on the conceptual level, followed by a presentation of the distributed query execution steps in these information systems. The research ends with a presentation of some aspects of distributed database query optimization and the strategies used for it.
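
    As a toy illustration of one classic strategy in this area (a generic textbook idea, not something taken from the article): pushing a selection down to the site that stores a relation before shipping rows for a join reduces the data transferred. All table contents and sizes below are invented.

```python
# Toy distributed query optimisation: count rows shipped between two sites when
# a selection is applied before vs after the transfer. All data are invented.
orders = [{"order_id": i, "customer_id": i % 50, "amount": (i * 3) % 997}
          for i in range(10_000)]                       # stored at site A
customers = [{"customer_id": c, "region": "north" if c % 4 == 0 else "south"}
             for c in range(50)]                        # stored at site B (join partner)

def ship(rows):
    """Stand-in for a network transfer; records how many rows were moved."""
    ship.total += len(rows)
    return rows

# Plan 1: ship every order to site B, where the selection and join would run.
ship.total = 0
_ = ship(orders)
plan1_rows_shipped = ship.total

# Plan 2 (selection pushdown): filter at site A first, ship only the survivors.
ship.total = 0
_ = ship([o for o in orders if o["amount"] > 900])
plan2_rows_shipped = ship.total

print(plan1_rows_shipped, plan2_rows_shipped)  # far fewer rows move under plan 2
```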

  6. Density Distribution Sunflower Plots

    Dupont, William D.; W. Dale Plummer Jr.

    2003-01-01

    Density distribution sunflower plots are used to display high-density bivariate data. They are useful for data where a conventional scatter plot is difficult to read due to overstriking of the plot symbol. The x-y plane is subdivided into a lattice of regular hexagonal bins of width w specified by the user. The user also specifies the values of l, d, and k that affect the plot as follows. Individual observations are plotted when there are fewer than l observations per bin, as in a conventio...

  7. Distributed multiple description coding

    Bai, Huihui; Zhao, Yao

    2011-01-01

    This book examines distributed video coding (DVC) and multiple description coding (MDC), two novel techniques designed to address the problems of conventional image and video compression coding. Covering all fundamental concepts and core technologies, the chapters can also be read as independent and self-sufficient, describing each methodology in sufficient detail to enable readers to repeat the corresponding experiments easily. Topics and features: provides a broad overview of DVC and MDC, from the basic principles to the latest research; covers sub-sampling based MDC, quantization based MDC,

  8. Distributed actuator deformable mirror

    Bonora, Stefano

    2010-01-01

    In this paper we present a Deformable Mirror (DM) based on the continuous voltage distribution over a resistive layer. This DM can correct the low order aberrations (defocus, astigmatism, coma and spherical aberration) using three electrodes with nine contacts leading to an ideal device for sensorless applications. We present a mathematical description of the mirror, a comparison between the simulations and the experimental results. In order to demonstrate the effectiveness of the device we compared its performance with the one of a multiactuator DM of similar properties in the correction of an aberration statistics. At the end of the paper an example of sensorless correction is shown.

  9. On Distributed Embedded Systems

    Arvindra Sehmi

    2013-01-01

    Full Text Available Thinking of distributed embedded systems (DES)—let alone the more general area of embedded computing—as a unified topic is difficult. Nevertheless, it is a vastly important topic and potentially represents a revolution in information technology (IT). DES is driven by the increasing capabilities and ever-declining costs of computing and communications devices, resulting in networked systems of embedded computers whose functional components are nearly invisible to end users. These systems have the potential to alter radically the way in which people interact with their environment by linking a range of devices and sensors that will allow information to be collected, shared, and processed in unprecedented ways.

  10. DEM - distribution energy management

    Seppaelae, A.; Kekkonen, V.; Koreneff, G. [VTT Energy, Espoo (Finland)] [and others

    1998-08-01

    The electricity market was de-regulated in Finland at the end of 1995 and the customers can now freely choose their power suppliers. The national grid and local distribution network operators are now separated from the energy business. The network operators transmit the electric power to the customers on equal terms regardless of from whom the power is purchased. The Finnish national grid is owned by one company, Finnish Power Grid PLC (Fingrid). The major shareholders of Fingrid are the state of Finland, two major power companies and institutional investors. In addition there are about 100 local distribution utilities operating the local 110 kV, 20 kV and 0.4 kV networks. The distribution utilities are mostly owned by the municipalities and towns. In each network one energy supplier is always responsible for the hourly energy balance in the network (a 'host') and it also has the obligation to provide public energy prices accessible to any customer in the network's area. The Finnish regulating authorities nominate such a supplier who has a dominant market share in the network's area as the supplier responsible for the network's energy balance. A regulating authority, called the Electricity Market Centre, ensures that the market is operating properly. The transmission prices and public energy prices are under the Electricity Market Centre's control. For domestic and other small customers the cost of hourly metering (ca. 1000 US$) would be prohibitive and therefore the use of conventional energy metering and load models is under consideration by the authorities. Small customer trade with the load models (instead of the hourly energy recording) is scheduled to start in the first half of 1998. In this presentation, the problems of energy management from the standpoint of the energy trading and distributing companies in the new situation are first discussed. The topics covered are: the hourly load data management, the forecasting and estimation of hourly energy demands

  11. Approximation of distributed delays

    Lu, Hao; Eberard, Damien; Simon, Jean-Pierre

    2010-01-01

    We address in this paper the approximation problem of distributed delays. Such elements are convolution operators with kernel having bounded support, and appear in the control of time-delay systems. From the rich literature on this topic, we propose a general methodology to achieve such an approximation. For this, we enclose the approximation problem in the graph topology, and work with the norm defined over the convolution Banach algebra. The class of rational approximates is described, and a constructive approximation is proposed. Analysis in time and frequency domains is provided. This methodology is illustrated on the stabilization control problem, for which simulations results show the effectiveness of the proposed methodology.

  12. Distribution load estimation - DLE

    Seppaelae, A. [VTT Energy, Espoo (Finland)]

    1996-12-31

    The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems

  13. Distributed Parameter Modelling Applications

    Sales-Cruz, Mauricio; Cameron, Ian; Gani, Rafiqul

    Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications for both steady state and dynamic applications are given. These relate to the processing of oil shale, the granulation of industrial fertilizers and the development of a short-path evaporator. The oil shale processing problem illustrates the interplay amongst particle flows in rotating drums, heat and mass transfer between solid and gas phases. The industrial application considers the dynamics of an Alberta-Taciuk processor, commonly used in shale oil and oil ...

  14. Contracts in distributed systems

    Bartoletti, Massimo; Zunino, Roberto; 10.4204/EPTCS.59.11

    2011-01-01

    We present a parametric calculus for contract-based computing in distributed systems. By abstracting from the actual contract language, our calculus generalises both the contracts-as-processes and contracts-as-formulae paradigms. The calculus features primitives for advertising contracts, for reaching agreements, and for querying the fulfilment of contracts. Coordination among principals happens via multi-party sessions, which are created once agreements are reached. We present two instances of our calculus, by modelling contracts as (i) processes in a variant of CCS, and (ii) as formulae in a logic. With the help of a few examples, we discuss the primitives of our calculus, as well as some possible variants.

  15. Bilateral matrix-exponential distributions

    Bladt, Mogens; Esparza, Luz Judith R; Nielsen, Bo Friis

    2012-01-01

    In this article we define the classes of bilateral and multivariate bilateral matrix-exponential distributions. These distributions have support on the entire real space and have rational moment-generating functions. They extend the class of bilateral phase-type distributions of [1] and the class of multivariate matrix-exponential distributions of [9]. We prove a characterization theorem stating that a random variable has a bilateral multivariate distribution if and only if all linear combinations of the coordinates have a univariate bilateral matrix-exponential distribution. As an application we demonstrate that certain multivariate distributions, which are governed by the underlying Markov jump process generating a phase-type distribution, have a bilateral matrix-exponential distribution at the time of absorption, see also [4].

  16. FMCG companies specific distribution channels

    Ioana Barin

    2009-12-01

    Full Text Available Distribution includes all activities undertaken by the producer, alone or in cooperation, from the completion of the final finished products or services until they are in the possession of consumers. Distribution consists of the following major components: distribution channels or marketing channels, which together form a distribution network, and logistics or physical distribution. In order to be achieved effectively, distribution of goods requires a number of activities and operational processes related to the transit of goods from producer to consumer, under the best conditions, using existing distribution channels and the logistics system. One of the essential functions of distribution is performing acts of sale, through which, along with the actual movement of goods, their change of ownership takes place, that is, the successive transfer of ownership from producer to consumer. This is an itinerary in the economic cycle of goods, called the distribution channel.

  17. SYVAC3 parameter distribution package

    SYVAC3 (Systems Variability Analysis Code, generation 3) is a computer program that implements a method called systems variability analysis to analyze the behaviour of a system in the presence of uncertainty. This method is based on simulating the system many times to determine the variation in behaviour it can exhibit. SYVAC3 specializes in systems representing the transport of contaminants, and has several features to simplify the modelling of such systems. It provides a general tool for estimating environmental impacts from the dispersal of contaminants. This report describes a software object type (a generalization of a data type) called Parameter Distribution. This object type is used in SYVAC3, and can also be used independently. Parameter Distribution has the following subtypes: beta distribution; binomial distribution; constant distribution; lognormal distribution; loguniform distribution; normal distribution; piecewise uniform distribution; triangular distribution; and uniform distribution. Some of these distributions can be altered by correlating two parameter distribution objects. This report provides complete specifications for parameter distributions, and also explains how to use them. It should meet the needs of casual users, reviewers, and programmers who wish to add their own subtypes. (author). 30 refs., 75 tabs., 56 figs
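
    To illustrate what such a parameter-distribution object type might look like, here is a hypothetical sketch with two of the listed subtypes; it is not SYVAC3's actual implementation, and the class and parameter names are invented.

```python
# Hypothetical sketch of a "Parameter Distribution" object type with two of the
# subtypes listed in the report (uniform, lognormal). Not the SYVAC3 code.
import math
import random
from abc import ABC, abstractmethod

class ParameterDistribution(ABC):
    """A parameter whose value is drawn anew for each system simulation."""
    @abstractmethod
    def sample(self, rng: random.Random) -> float: ...

class UniformDistribution(ParameterDistribution):
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high
    def sample(self, rng: random.Random) -> float:
        return rng.uniform(self.low, self.high)

class LognormalDistribution(ParameterDistribution):
    def __init__(self, geometric_mean: float, geometric_sd: float):
        self.mu = math.log(geometric_mean)
        self.sigma = math.log(geometric_sd)
    def sample(self, rng: random.Random) -> float:
        return rng.lognormvariate(self.mu, self.sigma)

# One simulation run draws every uncertain parameter once (names are invented):
rng = random.Random(42)
parameters = {
    "retardation_factor": LognormalDistribution(10.0, 2.0),
    "flow_velocity_m_per_a": UniformDistribution(0.1, 1.0),
}
draw = {name: dist.sample(rng) for name, dist in parameters.items()}
print(draw)
```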

  18. Coping with distributed computing

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given

  19. Testing reveals proppant distribution

    Crump, J.B. (Halliburton Services, Duncan, OK (US)); Ekstrand, B.B. (Halliburton Services, Houston, TX (US)); Almond, S.W. (Halliburton Services, Bakersfield, CA (US))

    1988-10-31

    Sand distribution tests, undertaken to answer inquiries from a producing company, have shown that proppant placed during a hydraulic fracture treatment is evenly distributed into the perforated interval. Therefore, for planning purposes, a good assumption is that all perforations will pass essentially equal volumes of proppant, provided perforation quality is uniform, perforations are open, and bottom hole treating pressure is constant across the interval. Under simulated conditions, a fluid viscosity of 30 cp (511 sec/sup -1/) allowed 20/40 sand to ''turn to corner'' and pass through perforations with minimal stratification. This finding refutes the theory held by some that the bottom perforation is ''slugged'' with heavier concentration of sand than the upper perforations, and the theory's logical extension that after the bottom perforation is filled, it plugs and the perforation just above becomes the next bottom until the entire perforated interval is screened out. Tests described in this article were part of a program implemented to analyze causes of screen outs encountered in fracturing operations.

  20. Distributed System Design Checklist

    Hall, Brendan; Driscoll, Kevin

    2014-01-01

    This report describes a design checklist targeted to fault-tolerant distributed electronic systems. Many of the questions and discussions in this checklist may be generally applicable to the development of any safety-critical system. However, the primary focus of this report covers the issues relating to distributed electronic system design. The questions that comprise this design checklist were created with the intent to stimulate system designers' thought processes in a way that hopefully helps them to establish a broader perspective from which they can assess the system's dependability and fault-tolerance mechanisms. While best effort was expended to make this checklist as comprehensive as possible, it is not (and cannot be) complete. Instead, we expect that this list of questions and the associated rationale for the questions will continue to evolve as lessons are learned and further knowledge is established. In this regard, it is our intent to post the questions of this checklist on a suitable public web-forum, such as the NASA DASHLink AFCS repository. From there, we hope that it can be updated, extended, and maintained after our initial research has been completed.

  1. Atlas Distributed Analysis Tools

    de La Hoz, Santiago Gonzalez; Ruiz, Luis March; Liko, Dietrich

    2008-06-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale. Up to 10000 jobs were processed in one day. The experiences obtained operating the system on several grid flavours was essential to perform a user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase data was registered in the LHC File Catalog (LFC) and replicated in external sites. For the main test, few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS management computing board decided to integrate the collaboration efforts in distributed analysis in only one project, GANGA. The goal is to test the reconstruction and analysis software in a large scale Data production using Grid flavors in several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid; it provides job splitting and merging, and includes automated job monitoring and output retrieval.

  2. ATLAS Distributed Analysis Tools

    Gonzalez de la Hoz, Santiago; Liko, Dietrich

    2008-01-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale. Up to 10000 jobs were processed in one day. The experiences obtained operating the system on several grid flavours was essential to perform a user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase data was registered in the LHC File Catalog (LFC) and replicated in external sites. For the main test, few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS management computing board decided to integrate the collaboration efforts in distributed analysis in only one project, GANGA. The goal is to test the reconstruction and analysis software in a large scale Data production using Grid flavors in several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid; it provides job splitting a...

  3. Bounding species distribution models

    Thomas J. STOHLGREN; Catherine S. JARNEVICH; Wayne E. ESAIAS; Jeffrey T. MORISETTE

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern.Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development,yet there is no recommended best practice for “clamping” model extrapolations.We relied on two commonly used modeling approaches:classification and regression tree (CART) and maximum entropy (Maxent) models,and we tested a simple alteration of the model extrapolations,bounding extrapolations to the maximum and minimum values of primary environmental predictors,to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States.Findings suggest that multiple models of bounding,and the most conservative bounding of species distribution models,like those presented here,should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5):642-647,2011].

  4. Efficient Distributed Medium Access

    Shah, Devavrat; Tetali, Prasad

    2011-01-01

    Consider a wireless network of n nodes represented by a graph G=(V, E) where an edge (i,j) models the fact that transmissions of i and j interfere with each other, i.e. simultaneous transmissions of i and j become unsuccessful. Hence it is required that at each time instance a set of non-interfering nodes (corresponding to an independent set in G) access the wireless medium. To utilize wireless resources efficiently, it is required to arbitrate the access of medium among interfering nodes properly. Moreover, to be of practical use, such a mechanism is required to be totally distributed as well as simple. As the main result of this paper, we provide such a medium access algorithm. It is randomized, totally distributed and simple: each node attempts to access medium at each time with probability that is a function of its local information. We establish efficiency of the algorithm by showing that the corresponding network Markov chain is positive recurrent as long as the demand imposed on the network can be supp...

  5. Optimizing electrical distribution systems

    Electrical utility distribution systems are in the middle of an unprecedented technological revolution in planning, design, maintenance and operation. The prime movers of the revolution are the major economic shifts that affect decision making. The major economic influence on the revolution is the cost of losses (technical and nontechnical). The vehicle of the revolution is the computer, which enables decision makers to examine alternatives in greater depth and detail than their predecessors could. The more important elements of the technological revolution are: system planning, computers, load forecasting, analytical systems (primary systems, transformers and secondary systems), system losses and coming technology. The paper is directed towards the rather unique problems encountered by engineers of utilities in developing countries - problems that are being solved through high technology, such as the recent World Bank-financed engineering computer system for Sri Lanka. This system includes a DEC computer, digitizer, plotter and engineering software to model the distribution system via a digitizer, analyse the system and plot single-line diagrams. (author). 1 ref., 4 tabs., 6 figs

  6. Power distribution measuring device

    The present invention concerns a device for measuring power distribution of neutrons in a nuclear reactor. That is, a gamma thermometer used so far has drawbacks of slow time response and low sensitivity although it is not always necessary to use a movable incore detector for calibration. However, the device of the present invention compensates the drawback by incorporating a gamma thermometer and an another incore detector of a different type in an identical detector assembly. The gamma thermometer is calibrated by an electric heater. With such a constitution, the sensitivity calibration of the detector of different type incorporated in the identical detector assembly can be conducted without relying on a movable detector when the reactor is stable. Further, if the detector of the different type having rapid response, such as a fission ionization chamber or a self-powered type detector is used as a detector, a reactor core power distribution monitoring system of rapid time response can be attained. (I.S.)

  7. Unintegrated parton distributions

    Kimber, M

    2001-01-01

    We develop the theory of parton distributions f sub a (x, k sub t sup 2 , mu sup 2), unintegrated with respect to transverse momentum k sub t , from a phenomenological standpoint. In particular, we demonstrate a convenient approximation in which the unintegrated functions are obtained by explicitly performing the last step of parton evolution in perturbative QCD, with single-scale functions a(x,Q sup 2) as input. Results are presented in the context of DGLAP and combined BFKL-DGLAP evolution, but with angular ordering imposed in the last step of the evolution. We illustrate the application of these unintegrated distributions to predict cross sections for physical processes at lepton-hadron and hadron-hadron colliders. The use of partons with incoming transverse momentum, based on k sub t -factorisation, is intended to replace phenomenological 'smearing' in the perturbative region k sub t > k sub 0 (k sub 0 approx = 1 GeV), and enables the full kinematics of a process to be included even at leading order. We a...

  8. The software environment of RODOS

    The Software Environment of RODOS provides tools for processing and managing a large variety of different types of information, including those which are categorized in terms of meteorology, radiology, economy, emergency actions and countermeasures, rules, preferences, facts, maps, statistics, catalogues, models and methods. The main tasks of the Operating Subsystem OSY, which is based on the Client-Server Model, are the control of system operation, data management, and the exchange of information among various modules as well as the interaction with users in distributed computer systems. The paper describes the software environment of RODOS, in particular, the individual modules of its Operating Subsystem OSY, its distributed database, the geographical information system RoGIS, the on-line connections to radiological and meteorological networks and the software environment for the integration of external programs into the RODOS system

  9. Parametrizing the exoplanet eccentricity distribution with the Beta distribution

    Kipping, David M.

    2013-01-01

    It is suggested that the distribution of orbital eccentricities for extrasolar planets is well-described by the Beta distribution. Several properties of the Beta distribution make it a powerful tool for this purpose. For example, the Beta distribution can reproduce a diverse range of probability density functions (PDFs) using just two shape parameters (a and b). We argue that this makes it ideal for serving as a parametric model in Bayesian comparative population analysis. The Beta distributi...

  10. The double gluon distribution from the single gluon distribution

    Golec-Biernat, Krzysztof; Serino, Mirko; Snyder, Zachary; Stasto, Anna

    2016-01-01

    Using momentum sum rule for evolution equations for Double Parton Distribution Functions (DPDFs) in the leading logarithmic approximation, we find that the double gluon distribution function can be uniquely constrained via the single gluon distribution function. We also study numerically its evolution with a hard scale and show that an approximately factorized ansatz into the product of two single gluon distributions performs quite well at small values of $x$ but is always violated for larger values, as expected.

  11. Loss Allocation in a Distribution System with Distributed Generation Units

    Lund, Torsten; Nielsen, Arne Hejde; Sørensen, Poul Ejnar

    2007-01-01

    In Denmark, a large part of the electricity is produced by wind turbines and combined heat and power plants (CHPs). Most of them are connected to the network through distribution systems. This paper presents a new algorithm for allocation of the losses in a distribution system with distributed...

  12. Distributed Wind Energy in Idaho

    Gardner, John [Boise State Univ., ID (United States); Johnson, Kathryn [Colorado School of Mines, Golden, CO (United States); Haynes, Todd [Boise State Univ., ID (United States); Seifert, Gary [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2009-01-31

    This project is a research and development program aimed at furthering distributed wind technology. In particular, this project addresses some of the barriers to distributed wind energy utilization in Idaho.

  13. Distributed mobility management - framework & analysis

    Liebsch, M.; Seite, P.; Karagiannis, G.

    2013-01-01

    Mobile operators consider the distribution of mobility anchors to enable offloading some traffic from their core network. The Distributed Mobility Management (DMM) Working Group is investigating the impact of decentralized mobility management to existing protocol solutions, while taking into account

  14. Distributed charging of electrical assets

    Ghosh, Soumyadip; Phan, Dung; Sharma, Mayank; Wu, Chai Wah; Xiong, Jinjun

    2016-02-16

    The present disclosure relates generally to the field of distributed charging of electrical assets. In various examples, distributed charging of electrical assets may be implemented in the form of systems, methods and/or algorithms.

  15. Distributed Project Work

    Borch, Ole; Kirkegaard, B.; Knudsen, Morten;

    1998-01-01

    Project work has been used for many years at Aalborg University to improve learning of theory and methods given in courses. In a closed environment where the students are forming a group in a single room, the interaction behaviour is more or less given from the natural life. Group work...... in a distributed fashion over the Internet needs more attention to the interaction protocol since the physical group room is not existing. The purpose in this paper is to develop a method for online project work by using the product: Basic Support for Cooperative Work (BSCV). An analysis of a well-proven protocol...... for information exchange in the traditional project environment is performed. A group of teachers and a student group using small project examples test the method. The first test group used a prototype for testing and found the new activity synchronization difficult to adapt, so the method was finally adjusted...

  16. Contracts in distributed systems

    Massimo Bartoletti

    2011-07-01

    Full Text Available We present a parametric calculus for contract-based computing in distributed systems. By abstracting from the actual contract language, our calculus generalises both the contracts-as-processes and contracts-as-formulae paradigms. The calculus features primitives for advertising contracts, for reaching agreements, and for querying the fulfilment of contracts. Coordination among principals happens via multi-party sessions, which are created once agreements are reached. We present two instances of our calculus, by modelling contracts as (i processes in a variant of CCS, and (ii as formulae in a logic. With the help of a few examples, we discuss the primitives of our calculus, as well as some possible variants.

  17. Process evaluation distributed system

    Moffatt, Christopher L. (Inventor)

    2006-01-01

    The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). A data display module in communication with the database server, including a website for viewing collected process data in a desired metrics form, the data display module also for providing desired editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module, minimizes the requirement for manual input of the collected process data.

  18. Distributed road assessment system

    Beer, N. Reginald; Paglieroni, David W

    2014-03-25

    A system that detects damage on or below the surface of a paved structure or pavement is provided. A distributed road assessment system includes road assessment pods and a road assessment server. Each road assessment pod includes a ground-penetrating radar antenna array and a detection system that detects road damage from the return signals as the vehicle on which the pod is mounted travels down a road. Each road assessment pod transmits to the road assessment server occurrence information describing each occurrence of road damage that is newly detected on a current scan of a road. The road assessment server maintains a road damage database of occurrence information describing the previously detected occurrences of road damage. After the road assessment server receives occurrence information for newly detected occurrences of road damage for a portion of a road, the road assessment server determines which newly detected occurrences correspond to which previously detected occurrences of road damage.

  19. Blood distribution measurements

    The necessity of employing a vascular exploration technique, which is non-aggressive and repetitive, and which gives total and quantitative results led to the exploitation of a rheo-graphic method. An apparatus was constructed for making such measurements. Some appropriate statistics were subsequently determined which allowed the law concerning the establishment of a circulatory index to be determined as well as its statistical distribution and pathological threshold. The results of an examination, which are presented graphically, led to the establishment of a technique (known as cartography) giving the state of the circulation in a member. The application of this technique to persons affected with arteritis allowed the validity of the law and the previously established thresholds to be verified. The apparatus was completely automated and thus gives results which are entirely objective. (author)

  20. Nuclear Parton Distribution Functions

    I. Schienbein, J.Y. Yu, C. Keppel, J.G. Morfin, F. Olness, J.F. Owens

    2009-06-01

    We study nuclear effects of charged current deep inelastic neutrino-iron scattering in the framework of a {chi}{sup 2} analysis of parton distribution functions (PDFs). We extract a set of iron PDFs which are used to compute x{sub Bj}-dependent and Q{sup 2}-dependent nuclear correction factors for iron structure functions which are required in global analyses of free nucleon PDFs. We compare our results with nuclear correction factors from neutrino-nucleus scattering models and correction factors for charged-lepton--iron scattering. We find that, except for very high x{sub Bj}, our correction factors differ in both shape and magnitude from the correction factors of the models and charged-lepton scattering.

  1. Carotenoid Distribution in Nature.

    Alcaíno, Jennifer; Baeza, Marcelo; Cifuentes, Víctor

    2016-01-01

    Carotenoids are naturally occurring red, orange and yellow pigments that are synthesized by plants and some microorganisms and fulfill many important physiological functions. This chapter describes the distribution of carotenoid in microorganisms, including bacteria, archaea, microalgae, filamentous fungi and yeasts. We will also focus on their functional aspects and applications, such as their nutritional value, their benefits for human and animal health and their potential protection against free radicals. The central metabolic pathway leading to the synthesis of carotenoids is described as the three following principal steps: (i) the synthesis of isopentenyl pyrophosphate and the formation of dimethylallyl pyrophosphate, (ii) the synthesis of geranylgeranyl pyrophosphate and (iii) the synthesis of carotenoids per se, highlighting the differences that have been found in several carotenogenic organisms and providing an evolutionary perspective. Finally, as an example, the synthesis of the xanthophyll astaxanthin is discussed. PMID:27485217

  2. Density Distribution Sunflower Plots

    William D. Dupont

    2003-01-01

    Full Text Available Density distribution sunflower plots are used to display high-density bivariate data. They are useful for data where a conventional scatter plot is difficult to read due to overstriking of the plot symbol. The x-y plane is subdivided into a lattice of regular hexagonal bins of width w specified by the user. The user also specifies the values of l, d, and k that affect the plot as follows. Individual observations are plotted when there are less than l observations per bin as in a conventional scatter plot. Each bin with from l to d observations contains a light sunflower. Other bins contain a dark sunflower. In a light sunflower each petal represents one observation. In a dark sunflower, each petal represents k observations. (A dark sunflower with p petals represents between /2-pk k and /2+pk k observations. The user can control the sizes and colors of the sunflowers. By selecting appropriate colors and sizes for the light and dark sunflowers, plots can be obtained that give both the overall sense of the data density distribution as well as the number of data points in any given region. The use of this graphic is illustrated with data from the Framingham Heart Study. A documented Stata program, called sunflower, is available to draw these graphs. It can be downloaded from the Statistical Software Components archive at http://ideas.repec.org/c/boc/bocode/s430201.html . (Journal of Statistical Software 2003; 8 (3: 1-5. Posted at http://www.jstatsoft.org/index.php?vol=8 .

  3. CMCC Data Distribution Centre

    Aloisio, Giovanni; Fiore, Sandro; Negro, A.

    2010-05-01

    The CMCC Data Distribution Centre (DDC) is the primary entry point (web gateway) to the CMCC. It is a Data Grid Portal providing a ubiquitous and pervasive way to ease data publishing, climate metadata search, datasets discovery, metadata annotation, data access, data aggregation, sub-setting, etc. The grid portal security model includes the use of HTTPS protocol for secure communication with the client (based on X509v3 certificates that must be loaded into the browser) and secure cookies to establish and maintain user sessions. The CMCC DDC is now in a pre-production phase and it is currently used only by internal users (CMCC researchers and climate scientists). The most important component already available in the CMCC DDC is the Search Engine which allows users to perform, through web interfaces, distributed search and discovery activities by introducing one or more of the following search criteria: horizontal extent (which can be specified by interacting with a geographic map), vertical extent, temporal extent, keywords, topics, creation date, etc. By means of this page the user submits the first step of the query process on the metadata DB, then, she can choose one or more datasets retrieving and displaying the complete XML metadata description (from the browser). This way, the second step of the query process is carried out by accessing to a specific XML document of the metadata DB. Finally, through the web interface, the user can access to and download (partially or totally) the data stored on the storage device accessing to OPeNDAP servers and to other available grid storage interfaces. Requests concerning datasets stored in deep storage will be served asynchronously.

  4. The Saguaro distributed operating system

    Andrews, Gregory R.; Schlichting, Richard D.

    1989-05-01

    The progress achieved over the final year of the Saguaro distributed operating system project is presented. The primary achievements were in related research, including SR distributed programming language, the MLP system for constructing distributed mixed-language programs, the Psync interprocess communication mechanism, a configurable operating system kernal called the x-kernal, and the development of language mechanisms for performing failure handling in distributed programming languages.

  5. Parton distribution functions of hadrons

    The determination of the universal parton distribution functions in the QCD framework is connected to all aspects of rigorous tests of the Standard Model. Knowledge of distribution functions allows the formulation of predictions on anticipated measurements at much highr energies than are currently achievable. Topics covered in this discussion include QCD formalism for hard processes, parton distribution functions, reference processes, global analyses, and a survey of recent parton distributions. 102 refs., 5 figs

  6. Interpreting spotted dolphin age distributions

    Barlow, Jay; Hohn, Aleta A.

    1984-01-01

    Previous work has determined the age distribution from a sample of spotted dolphins (Stenella attenuata) killed in the eastern Pacific tuna purse-seine fishery. In this paper we examine the usefulness of this age distribution for estimating natural mortality rates. The observed age distribution has a deficiency of individuals from 5-15 years and cannot represent a stable age distribution. Sampling bias and errors in age interpretation are examined as possible causes of the "dip" in the obs...

  7. Distributed Streaming with Finite Memory

    Neven, Frank; Schweikardt, Nicole; Servais, Frédéric; Tan, Tony

    2015-01-01

    We introduce three formal models of distributed systems for query evaluation on massive databases: Distributed Streaming with Register Automata (DSAs), Distributed Streaming with Register Transducers (DSTs), and Distributed Streaming with Register Transducers and Joins (DSTJs). These models are based on the key-value paradigm where the input is transformed into a dataset of key-value pairs, and on each key a local computation is performed on the values associated with that key resulting in an...

  8. Distributed terascale volume visualization using distributed shared virtual memory

    Beyer, Johanna

    2011-10-01

    Table 1 illustrates the impact of different distribution unit sizes, different screen resolutions, and numbers of GPU nodes. We use two and four GPUs (NVIDIA Quadro 5000 with 2.5 GB memory) and a mouse cortex EM dataset (see Figure 2) of resolution 21,494 x 25,790 x 1,850 = 955GB. The size of the virtual distribution units significantly influences the data distribution between nodes. Small distribution units result in a high depth complexity for compositing. Large distribution units lead to a low utilization of GPUs, because in the worst case only a single distribution unit will be in view, which is rendered by only a single node. The choice of an optimal distribution unit size depends on three major factors: the output screen resolution, the block cache size on each node, and the number of nodes. Currently, we are working on optimizing the compositing step and network communication between nodes. © 2011 IEEE.

  9. Universal features of multiplicity distributions

    Universal features of multiplicity distributions are studied and combinants, certain linear combinations of ratios of probabilities, are introduced. It is argued that they can be a useful tool in analyzing multiplicity distributions of hadrons emitted in high energy collisions and large scale structure of galaxy distributions

  10. The Weibull distribution a handbook

    Rinne, Horst

    2008-01-01

    The Most Comprehensive Book on the SubjectChronicles the Development of the Weibull Distribution in Statistical Theory and Applied StatisticsExploring one of the most important distributions in statistics, The Weibull Distribution: A Handbook focuses on its origin, statistical properties, and related distributions. The book also presents various approaches to estimate the parameters of the Weibull distribution under all possible situations of sampling data as well as approaches to parameter and goodness-of-fit testing.Describes the Statistical Methods, Concepts, Theories, and Applications of T

  11. Practical quasi parton distribution functions

    Ishikawa, Tomomi; Qiu, Jian-Wei; Yoshida, Shinsuke

    2016-01-01

    A completely new strategy to calculate parton distribution functions on the lattice has recently been proposed. In this method, lattice calculable observables, called quasi distributions, are related to normal distributions. The quasi distributions are known to contain power-law UV divergences arise from a Wilson line in the non-local operator, while the normal distributions only have logatithmic UV divergences. We propose possible method to subtract the power divegence to make the matching of the quasi with the normal distributions well-defined. We also demonstrate the matching of the quasi quark distribution between continuum and lattice implementing the power divergence subtraction. The matching calculations are carried out by one-loop perturbation.

  12. LHCb Distributed Conditions Database

    Clemencic, Marco

    2007-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCB library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica o...

  13. Distributed Deliberative Recommender Systems

    Recio-García, Juan A.; Díaz-Agudo, Belén; González-Sanz, Sergio; Sanchez, Lara Quijano

    Case-Based Reasoning (CBR) is one of most successful applied AI technologies of recent years. Although many CBR systems reason locally on a previous experience base to solve new problems, in this paper we focus on distributed retrieval processes working on a network of collaborating CBR systems. In such systems, each node in a network of CBR agents collaborates, arguments and counterarguments its local results with other nodes to improve the performance of the system's global response. We describe D2ISCO: a framework to design and implement deliberative and collaborative CBR systems that is integrated as a part of jcolibritwo an established framework in the CBR community. We apply D2ISCO to one particular simplified type of CBR systems: recommender systems. We perform a first case study for a collaborative music recommender system and present the results of an experiment of the accuracy of the system results using a fuzzy version of the argumentation system AMAL and a network topology based on a social network. Besides individual recommendation we also discuss how D2ISCO can be used to improve recommendations to groups and we present a second case of study based on the movie recommendation domain with heterogeneous groups according to the group personality composition and a group topology based on a social network.

  14. Distributed Project Work

    Borch, Ole; Kirkegaard, B.; Knudsen, Morten;

    1998-01-01

    Project work has been used for many years at Aalborg University to improve learning of theory and methods given in courses. In a closed environment where the students are forming a group in a single room, the interaction behaviour is more or less given from the natural life. Group work in a distr......Project work has been used for many years at Aalborg University to improve learning of theory and methods given in courses. In a closed environment where the students are forming a group in a single room, the interaction behaviour is more or less given from the natural life. Group work...... in a distributed fashion over the Internet needs more attention to the interaction protocol since the physical group room is not existing. The purpose in this paper is to develop a method for online project work by using the product: Basic Support for Cooperative Work (BSCV). An analysis of a well-proven protocol...... for information exchange in the traditional project environment is performed. A group of teachers and a student group using small project examples test the method. The first test group used a prototype for testing and found the new activity synchronization difficult to adapt, so the method was finally adjusted...

  15. Electronic procedure distribution

    Printed procedures can offer a mix of text and graphic information that improves readability and increases understanding. A typical procedure uses illustrations and graphics to clarify concepts, a variety of type styles and weights to make it easier to find different topics and sections, white space to improve readability, and familiar navigational clues such as page numbers and topic headers. Initially, electronic procedure systems had limited typeface options, often only a single typeface, with no capability for enhancing readability by varying type size bolding, italicizing, or underlining, and no ability to include graphics. Even recently, many text-only electronic procedures were originally created in a modern What-You-See-Is-What-You-Get (WYSI-WYG) document authoring system, only to be converted to pages and pages of plain type for electronic distribution. Given the choice of paper or on-line producers, most users have chosen paper for its readability. But current-generation electronic document systems that use formatted text and embedded graphics offer users vastly improved readability. Further, they are offering ever-better search tools to enable rapid location of material of interest

  16. Distributed Merge Trees

    Morozov, Dmitriy; Weber, Gunther

    2013-01-08

    Improved simulations and sensors are producing datasets whose increasing complexity exhausts our ability to visualize and comprehend them directly. To cope with this problem, we can detect and extract significant features in the data and use them as the basis for subsequent analysis. Topological methods are valuable in this context because they provide robust and general feature definitions. As the growth of serial computational power has stalled, data analysis is becoming increasingly dependent on massively parallel machines. To satisfy the computational demand created by complex datasets, algorithms need to effectively utilize these computer architectures. The main strength of topological methods, their emphasis on global information, turns into an obstacle during parallelization. We present two approaches to alleviate this problem. We develop a distributed representation of the merge tree that avoids computing the global tree on a single processor and lets us parallelize subsequent queries. To account for the increasing number of cores per processor, we develop a new data structure that lets us take advantage of multiple shared-memory cores to parallelize the work on a single node. Finally, we present experiments that illustrate the strengths of our approach as well as help identify future challenges.

  17. LHCb distributed conditions database

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here

  18. LHCb distributed conditions database

    Clemencic, M.

    2008-07-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here.

  19. Distribution effects of electricity tax illustrated by different distribution concepts

    This study demonstrates the significance of the choice of distribution concepts in analyses of distribution effects of electricity tax. By distribution effects are meant that life circumstances are changing. The focus is on different income concepts. Income is an important element in the life circumstances of the households. The distribution effects are studied by focusing on general income before and after tax, pension able earnings before and after tax and total consumption expenditure. The authors study how increased electricity expenses caused by a proportional increase of the electricity tax affect the households in various income groups. It is found that the burden of such an increased tax, measured by the budget part set aside for electricity, decreases with income no matter what distribution concept is used. By calculating measures of inequality for income minus electricity tax before and after the tax increase, it is found that the measures of inequality significantly depend on the choice of distribution concept

  20. Free 3-distributions: holonomy, Fefferman constructions and dual distributions

    Armstrong, Stuart

    2007-01-01

    This paper analyses the parabolic geometries generated by a free 3-distribution in the tangent space of a manifold. It shows the existence of normal Fefferman constructions over CR and Lagrangian contact structures corresponding to holonomy reductions to SO(4,2) and SO(3,3), respectively. There is also a fascinating construction of a `dual' distribution when the holonomy reduces to $G_2'$. The paper concludes with some holonomy constructions for free $n$-distributions for $n>3$.

  1. DISTRIBUTION OF LOAD USING MOBILE AGENT IN DISTRIBUTED WEB SERVERS

    VijayaKumar G. Dhas; V. Rhymend Uthariaraj

    2014-01-01

    The continuing growth of the World-Wide Web is placing increasing demands on popular Web servers. Many of the sites are now using distributed Web servers (i.e., groups of machines) to service the increasing number of client requests, as a single server cannot handle the workload. Incoming client requests must be distributed in some fashion among the machines in the distributed Web server to improve the performance. In the existing work, reducing the high message complexity is a challenge. Thi...

  2. Multicriteria Reconfiguration of Distribution Network with Distributed Generation

    N.I. Voropai; Bat-Undraal, B.

    2012-01-01

    The paper addresses the problem of multicriteria reconfiguration of distribution network with distributed generation according to the criterion of minimum power loss under normal conditions and the criterion of power supply reliability under postemergency conditions. Efficient heuristic and multicriteria methods are used to solve the problem including advanced ant colony algorithm for minimum loss reconfiguration of distribution network, the sorting-out algorithm of cell formation for island ...

  3. Multiplicity distributions for $e^{+}e^{-}$ collisions using Weibull distribution

    Dash, Sadhana; Sett, Priyanka

    2016-01-01

    The two parameters Weibull function is used to describe the charged particle multiplicity distribution in $e^{+}e^{-}$ collisions at the highest available energy measured by TASSO and ALEPH experiments. The Weibull distribution has wide applications in naturally evolving processes based on fragmentation and sequential branching. The Weibull model describes the multiplicity distribution very well, as particle production processes involve QCD parton fragmentation. The effective energy model of particle production was verified using Weibull parameters and the same was used to predict the multiplicity distribution in $e^{+}e^{-}$ collisions at future collider energies.

  4. Voltage regulation in distribution networks with distributed generation

    Blažič, B.; Uljanić, B.; Papič, I.

    2012-11-01

    The paper deals with the topic of voltage regulation in distribution networks with relatively high distributed energy resources (DER) penetration. The problem of voltage rise is described and different options for voltage regulation are given. The influence of DER on voltage profile and the effectiveness of the investigated solutions are evaluated by means of simulation in DIgSILENT. The simulated network is an actual distribution network in Slovenia with a relatively high penetration of distributed generation. Recommendations for voltage control in networks with DER penetration are given at the end.

  5. NADS - Nuclear And Atomic Data System

    We have developed NADS (Nuclear and Atomic Data System), a web-based graphical interface for viewing pointwise and grouped cross-sections and distributions. Our implementation is a client / server model. The client is a Java applet that displays the graphical interface, which has interactive 2-D, 3-D, and 4-D plots and tables. The server, which can serve and perform computations the data, has been implemented in Python using the FUDGE package developed by Bret Beck at LLNL. Computational capabilities include algebraic manipulation of nuclear evaluated data in databases such as LLNL's ENDL-99, ENDF/B-V and ENDF/B-VI as well as user data. Processed data used in LLNL's transport codes are accessible as well. NADS is available from http://nuclear.llnl.gov/

  6. Scalable Integrated Multi-Mission Support System Simulator Release 3.0

    Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis

    2012-01-01

    The Scalable Integrated Multi-mission Support System (SIMSS) is a tool that performs a variety of test activities related to spacecraft simulations and ground segment checks. SIMSS is a distributed, component-based, plug-and-play client-server system useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations and is designed to be user-configurable or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as significantly reducing a mission s integration time and risk.

  7. Implementation of the Web-based laboratory

    Ying, Liu; Li, Xunbo

    2005-12-01

    With the rapid developments of Internet technologies, remote access and control via Internet is becoming a reality. A realization of the web-based laboratory (the W-LAB) was presented. The main target of the W-LAB was to allow users to easily access and conduct experiments via the Internet. While realizing the remote communication, a system, which adopted the double client-server architecture, was introduced. It ensures the system better security and higher functionality. The experimental environment implemented in the W-Lab was integrated by both virtual lab and remote lab. The embedded technology in the W-LAB system as an economical and efficient way to build the distributed infrastructural network was introduced. Furthermore, by introducing the user authentication mechanism in the system, it effectively secures the remote communication.

  8. The Lawrence Livermore National Laboratory Intelligent Actinide Analysis System

    The authors have developed an Intelligent Actinide Analysis System (IAAS) for Materials Management to use in the Plutonium Facility at the Lawrence Livermore National Laboratory. The IAAS will measure isotopic ratios for plutonium and other actinides non-destructively by high-resolution gamma-ray spectrometry. This system will measure samples in a variety of matrices and containers. It will provide automated control of many aspects of the instrument that previously required manual intervention and/or control. The IAAS is a second-generation instrument, based on the authors' experience in fielding gamma isotopic systems, that is intended to advance non-destructive actinide analysis for nuclear safeguards in performance, automation, ease of use, adaptability, systems integration and extensibility to robotics. It uses a client-server distributed monitoring and control architecture. The IAAS uses MGA3 as the isotopic analysis code. The design of the IAAS reduces the need for operator intervention, operator training, and operator exposure

  9. The Lawrence Livermore National Laboratory Intelligent Actinide Analysis System

    The authors have developed an Intelligent Actinide Analysis System (IAAS) for Materials Management to use in the Plutonium Facility at the Lawrence Livermore National Laboratory. The IAAS will measure isotopic ratios for plutonium and other actinides non-destructively by high-resolution gamma-ray spectrometry. This system will measure samples in a variety of matrices and containers. It will provide automated control of many aspects of the instrument that previously required manual intervention and/or control. The IAAS is a second-generation instrument, based on experience in fielding gamma isotopic systems, that is intended to advance non-destructive actinide analysis for nuclear safeguards in performance, automation, ease of use, adaptability, systems integration and extensibility to robotics. It uses a client-server distributed monitoring and control architecture. The IAAS uses MGA as the isotopic analysis code. The design of the IAAS reduces the need for operator intervention, operator training, and operator exposure

  10. The new slow control system for the ALEPH experiment at LEP

    The ALEPH slow control system consists of 7000 channels distributed over 35 networked microprocessors which are used to control and monitor the experimental apparatus. To improve the performance, the readout has been upgraded from a ROM based system to use 3U VME processors, running OS9. A new object- oriented design for the software has been implemented, where the full description of the system is held centrally in a relational database on the host VAX cluster. Control and monitoring is carried out through a library, which accesses the database and handles communications with the processors over Ethernet using a client-server model. The design and implementation of the system and initial experience with its use are described. ((orig.))

  11. RemoteLabs Platform

    Nils Crabeel

    2012-03-01

    Full Text Available This paper reports on a first step towards the implementation of a framework for remote experimentation of electric machines – the RemoteLabs platform. This project was focused on the development of two main modules: the user Web-based and the electric machines interfaces. The Web application provides the user with a front-end and interacts with the back-end – the user and experiment persistent data. The electric machines interface is implemented as a distributed client server application where the clients, launched by the Web application, interact with the server modules located in platforms physically connected the electric machines drives. Users can register and authenticate, schedule, specify and run experiments and obtain results in the form of CSV, XML and PDF files. These functionalities were successfully tested with real data, but still without including the electric machines. This inclusion is part of another project scheduled to start soon.

  12. Accessing HEP Collaboration documents using WWW and WAIS

    WAIS stands for Wide Area Information Server. It is a distributed information retrieval system. A WAIS system has a client-server architecture which consists of clients talking to a server via a TCP/IP network using the ANSI standard Z39-50 VI protocol. A freely available version (FreeWAIS) is supported by the Clearinghouse for Networked Information Discovery and Retrieval, also known as CNIDR. FreeWAIS-sf, which is the software the authors are using at Fermilab, is an extension of FreeWAIS. FreeWAIS-sf supports all the functionalities which FreeWAIS offers as well as additional indexing and searching capabilities for structured fields. World Wide Web (WWW) was originally developed by Tim Berners-Lee at CERN and is now the backbone for serving information on Internet. Here, the authors describe a system for accessing HEP collaboration documents using WWW and WAIS

  13. ADVANTAGES OF AN INFORMATION SYSTEM MONITORING AND STOCKS AGRICULTURAL PRICES. CASE STUDY – ROSIM

    Elena COFAS

    2013-01-01

    Full Text Available Abstract agricultural policy in our country is based on information dispersed, especially because there is no centralized monitoring system, who to provide reliable information, while the agricultural and food market is experiencing a general feeling of instability - basically, it consists of channels and a dysfunctional organizational structure, based on communication systems do not operate in real time.. An integrated on-line monitoring of prices of agricultural products is of great interest due to the integration of computer technology (communications and agricultural sciences, based on specific concepts: client / server architecture, the integrated platform software, decision support, database distributed relational distance communication through the web, object oriented programming, mathematical modeling, interactivity etc..

  14. Trends in AFIS technology: past, present, and future

    Cardwell, Guy; Bavarian, Behnam

    1997-01-01

    Automated Fingerprint Identification has a history of more than 20 years. In the last 5 years, there has been an explosion of technologies that have dramatically changed the face of AFIS. Few other engineering and science fields offer such a widespread use of technology as does computerized fingerprint recognition. Optics, computer vision, computer graphics, artificial intelligence, artificial neural networks, parallel processing, distributed client server applications, fault tolerant computing, scaleable architectures, local and wide area networking, mass storage, databases, are a few of the fields that have made quantum leaps in recent years. All of these improvements have a dramatic effect on the size, speed, and accuracy of automated fingerprint identification systems. ThIs paper offers a historical overview of these trends and discuss the state of the art. It culminates with an overview an educated forecast on future systems, especially those 'real time' systems for use in area of law enforcement and civil/commercial applications.

  15. Proceedings of the REXX symposium for developers and users

    This report contains viewgraphs on the following topics: REXX 1995 -- the growth of a language; the future of REXX; problems and issues writing REXX compliers; writing CGI scripts for WWW using REXX; object REXX: up close and personal; object REXX: openDoc support; report from the X3J18 committee; Centerpiece and object oriented REXX; REXX, distributed systems and objects; getting ready for object REXX; SOM -- present and future; rexinda; REXX for CICS/ESA; REXX changes in OS/2 warp; S/REXX by Benaroya; a REXX-based stock exchange real-time client/server environment for research, educational and public relations purposes: implementation and usage issues; REXX/370 Compiler and Library 1995; and how REXX helped me hit the ground running in UNIX

  16. Architecture of A Scalable Dynamic Parallel WebCrawler with High Speed Downloadable Capability for a Web Search Engine

    Mukhopadhyay, Debajyoti; Ghosh, Soumya; Kar, Saheli; Kim, Young-Chon

    2011-01-01

    Today World Wide Web (WWW) has become a huge ocean of information and it is growing in size everyday. Downloading even a fraction of this mammoth data is like sailing through a huge ocean and it is a challenging task indeed. In order to download a large portion of data from WWW, it has become absolutely essential to make the crawling process parallel. In this paper we offer the architecture of a dynamic parallel Web crawler, christened as "WEB-SAILOR," which presents a scalable approach based on Client-Server model to speed up the download process on behalf of a Web Search Engine in a distributed Domain-set specific environment. WEB-SAILOR removes the possibility of overlapping of downloaded documents by multiple crawlers without even incurring the cost of communication overhead among several parallel "client" crawling processes.

  17. CMS Data Analysis: Current Status and Future Strategy

    Innocente, V

    2003-01-01

    We present the current status of CMS data analysis architecture and describe work on future Grid-based distributed analysis prototypes. CMS has two main software frameworks related to data analysis: COBRA, the main framework, and IGUANA, the interactive visualisation framework. Software using these frameworks is used today in the world-wide production and analysis of CMS data. We describe their overall design and present examples of their current use with emphasis on interactive analysis. CMS is currently developing remote analysis prototypes, including one based on Clarens, a Grid-enabled client-server tool. Use of the prototypes by CMS physicists will guide us in forming a Grid-enriched analysis strategy. The status of this work is presented, as is an outline of how we plan to leverage the power of our existing frameworks in the migration of CMS software to the Grid.

  18. ATLAS Tags Web Service calls Athena via Athenaeum Framework

    Hrivnac, J; The ATLAS collaboration

    2010-01-01

    High Energy Physics experiments have started using Web Service style applications to access functionality of their main frameworks. Those frameworks, however, are not ready to be executed in a standard Web Service environment, as they are too complex, monolithic, and use non-standard and non-portable technologies. The ATLAS Tag Browser is one of those Web Services. To provide the possibility of extracting full ATLAS events from the standard Web Service, we need access to the full ATLAS offline framework, Athena. As Athena cannot run directly within any Web Service, a client-server approach has been chosen: the Web Service calls Athena remotely over an XML-RPC connection using the Athenaeum framework. The paper will discuss the integration of the Athenaeum framework with the ATLAS Tag database service, its distributed deployment, monitoring, and performance.
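
    The record states only that the Web Service invokes Athena remotely over an XML-RPC connection; a minimal illustration of that client-server pattern in Python is sketched below. The endpoint URL and the extract_event method are invented placeholders, not the actual Athenaeum interface.

        # Minimal XML-RPC client sketch; the endpoint and method name are
        # hypothetical placeholders, not the real Athenaeum/Athena interface.
        import xmlrpc.client

        # Assumed URL of a remote service wrapping the offline framework;
        # the call below only succeeds if such a server is actually running.
        server = xmlrpc.client.ServerProxy("http://athenaeum.example.org:8000/RPC2")

        # A hypothetical remote call: ask the wrapped framework to extract one
        # event selected by run and event number, returned as an opaque blob.
        payload = server.extract_event(167776, 143225633)
        print("received", len(payload), "bytes")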

  19. Future directions in controlling the LAMPF-PSR accelerator complex at Los Alamos National Laboratory

    Four interrelated projects are underway whose purpose is to migrate the LAMPF-PSR Accelerator Complex control systems to a system with a common set of hardware and software components. Project goals address problems in performance, maintenance and growth potential. Front-end hardware, operator interface hardware and software, computer systems, network systems and data system software are being simultaneously upgraded as part of these efforts. The efforts are being coordinated to provide for a smooth and timely migration to a client-server model-based data acquisition and control system. An increased use of distributed intelligence at both the front-end and the operator interface is a key element of the projects. (author)

  20. The X-Files Investigating Alien Performance in a Thin-client World

    Gunther, N J

    2000-01-01

    Many scientific applications use the X11 window environment, an open-source windowing GUI standard employing a client/server architecture. X11 promotes: distributed computing, thin-client functionality, cheap desktop displays, compatibility with heterogeneous servers, remote services and administration, and greater maturity than newer web technologies. This paper details the author's investigations into close encounters with alien performance in X11-based seismic applications running on a 200-node cluster, backed by 2 TB of mass storage. End-users cited two significant UFOs (Unidentified Faulty Operations): i) long application launch times and ii) poor interactive response times. The paper is divided into three major sections describing Close Encounters of the 1st Kind: citings of UFO experiences, the 2nd Kind: recording evidence of a UFO, and the 3rd Kind: contact and analysis. UFOs do exist and this investigation presents a real case study for evaluating workload analysis and other diagnostic tools.

  1. Communication System for CIMS Application Integration Platform

    1999-01-01

    CIMS has seen the growth of multiple incompatible hardware architectures, each architecture supporting several incompatible operating systems, and each platform operating with various incompatible development tools (e.g., programming language compilers, DBMS, etc.) and one or more incompatible graphic user interfaces. Also, the growth of the Internet and the World-Wide Web has introduced new dimensions of complexity into the development process. All of these must be dealt with as the application is made workable in a distributed client-server environment. This paper outlines the architecture of a communication system for the CIMS application integration platform. The communication system makes possible the request for service across heterogeneous platforms and networks, and provides some common solutions to issues common to CIMS applications.

  2. NADS - Nuclear And Atomic Data System

    McKinley, M S; Beck, B; McNabb, D

    2004-09-17

    We have developed NADS (Nuclear and Atomic Data System), a web-based graphical interface for viewing pointwise and grouped cross-sections and distributions. Our implementation is a client/server model. The client is a Java applet that displays the graphical interface, which has interactive 2-D, 3-D, and 4-D plots and tables. The server, which can serve and perform computations on the data, has been implemented in Python using the FUDGE package developed by Bret Beck at LLNL. Computational capabilities include algebraic manipulation of nuclear evaluated data in databases such as LLNL's ENDL-99, ENDF/B-V and ENDF/B-VI, as well as user data. Processed data used in LLNL's transport codes are accessible as well. NADS is available from http://nuclear.llnl.gov/

  3. NADS — Nuclear and Atomic Data System

    McKinley, Michael S.; Beck, Bret; McNabb, Dennis

    2005-05-01

    We have developed NADS (Nuclear and Atomic Data System), a web-based graphical interface for viewing pointwise and grouped cross sections and distributions. Our implementation is a client / server model. The client is a Java applet that displays the graphical interface, which has interactive 2-D, 3-D, and 4-D plots and tables. The server, which can serve and perform computations of the data, has been implemented in Python using the FUDGE package developed by Bret Beck at LLNL. Computational capabilities include algebraic manipulation of nuclear evaluated data in databases such as LLNL's ENDL-99, ENDF/B-V, and ENDF/B-VI, as well as user data. Processed data used in LLNL's transport codes are accessible as well. NADS is available from http://nuclear.llnl.gov/.

  4. A Software Architecture for High Level Applications

    A modular software platform for high level applications is under development at the National Synchrotron Light Source II project. This platform is based on client-server architecture, and the components of high level applications on this platform will be modular and distributed, and therefore reusable. An online model server is indispensable for model based control. Different accelerator facilities have different requirements for the online simulation. To supply various accelerator simulators, a set of narrow and general application programming interfaces is developed based on Tracy-3 and Elegant. This paper describes the system architecture for the modular high level applications, the design of narrow and general application programming interface for an online model server, and the prototype of online model server.

  5. SmartMal: A Service-Oriented Behavioral Malware Detection Framework for Mobile Devices

    Chao Wang

    2014-01-01

    This paper presents SmartMal—a novel service-oriented behavioral malware detection framework for vehicular and mobile devices. The highlight of SmartMal is to introduce service-oriented architecture (SOA) concepts and behavior analysis into the malware detection paradigms. The proposed framework relies on a client-server architecture: the client continuously extracts various features and transfers them to the server, and the server's main task is to detect anomalies using state-of-the-art detection algorithms. Multiple distributed servers simultaneously analyze the feature vector using various detectors, and information fusion is used to concatenate the results of the detectors. We also propose a cycle-based statistical approach for mobile device anomaly detection. We accomplish this by analyzing the users' regular usage patterns. Empirical results suggest that the proposed framework and novel anomaly detection algorithm are highly effective in detecting malware on Android devices.

  6. SmartMal: a service-oriented behavioral malware detection framework for mobile devices.

    Wang, Chao; Wu, Zhizhong; Li, Xi; Zhou, Xuehai; Wang, Aili; Hung, Patrick C K

    2014-01-01

    This paper presents SmartMal--a novel service-oriented behavioral malware detection framework for vehicular and mobile devices. The highlight of SmartMal is to introduce service-oriented architecture (SOA) concepts and behavior analysis into the malware detection paradigms. The proposed framework relies on a client-server architecture: the client continuously extracts various features and transfers them to the server, and the server's main task is to detect anomalies using state-of-the-art detection algorithms. Multiple distributed servers simultaneously analyze the feature vector using various detectors, and information fusion is used to concatenate the results of the detectors. We also propose a cycle-based statistical approach for mobile device anomaly detection. We accomplish this by analyzing the users' regular usage patterns. Empirical results suggest that the proposed framework and novel anomaly detection algorithm are highly effective in detecting malware on Android devices. PMID:25165729
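
    As a rough illustration of the client-server split and the score-fusion step described above, the sketch below has a client assemble a feature vector and a server combine the outputs of several anomaly detectors. The feature names, the detectors, and the simple averaging fusion rule are assumptions for illustration only, not the published SmartMal design.

        # Illustrative sketch of client-side feature extraction and server-side
        # fusion of detector scores; all names and the fusion rule are assumptions.
        from statistics import mean

        def extract_features() -> dict:
            # On a real device these would be measured continuously (CPU, traffic, ...).
            return {"cpu_load": 0.91, "net_kb_per_s": 480.0, "sms_per_hour": 12.0}

        def detector_cpu(f):      # flags sustained high CPU usage
            return 1.0 if f["cpu_load"] > 0.85 else 0.0

        def detector_traffic(f):  # flags unusually high outbound traffic
            return min(f["net_kb_per_s"] / 1000.0, 1.0)

        def detector_sms(f):      # flags bursts of outgoing SMS
            return min(f["sms_per_hour"] / 20.0, 1.0)

        def server_fuse(features) -> float:
            """Run every detector and fuse the scores (here: a simple mean)."""
            scores = [d(features) for d in (detector_cpu, detector_traffic, detector_sms)]
            return mean(scores)

        if __name__ == "__main__":
            f = extract_features()          # client side
            risk = server_fuse(f)           # server side
            verdict = "malware suspected" if risk > 0.5 else "normal"
            print("anomaly score:", round(risk, 2), "->", verdict)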

  7. System for Multicast File Transfer

    Dorin Custura

    2012-03-01

    The distribution of big files over the network from a single source to a large number of recipients is not efficient when using standard client-server or even peer-to-peer file transfer protocols. Thus, the transfer of a hierarchy of big files to multiple destinations can be optimized in terms of bandwidth usage and data storage reads by using multicast networking. In order to achieve that, a simple application-layer protocol can be imagined. It uses multicast UDP as transport and provides a mechanism for data ordering and retransmission. Some security problems are also considered in this protocol, because at this time the Internet standards supporting multicast security are still in the development stage.
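
    The protocol itself is not specified beyond "multicast UDP with data ordering and retransmission", but the sending side of such a scheme can be sketched as follows: each datagram carries a sequence number so receivers can detect gaps and request retransmission out of band. The group address, port and header layout below are assumptions for illustration, not the protocol from the article.

        # Sketch of a multicast UDP sender that prefixes each chunk with a
        # sequence number; group address, port and header format are assumptions.
        import socket
        import struct

        GROUP, PORT = "239.1.2.3", 5007   # assumed administratively scoped group
        CHUNK = 1024                      # payload bytes per datagram

        def send_file(path: str) -> None:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
            seq = 0
            with open(path, "rb") as f:
                while True:
                    payload = f.read(CHUNK)
                    if not payload:
                        break
                    # A 4-byte big-endian sequence number lets receivers order the
                    # chunks and detect losses to be repaired by retransmission.
                    sock.sendto(struct.pack("!I", seq) + payload, (GROUP, PORT))
                    seq += 1
            sock.close()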

  8. Distributed Observer Network

    Conroy, Michael; Mazzone, Rebecca; Little, William; Elfrey, Priscilla; Mann, David; Mabie, Kevin; Cuddy, Thomas; Loundermon, Mario; Spiker, Stephen; McArthur, Frank; Srey, Tate; Bonilla, Dennis

    2010-01-01

    The Distributed Observer network (DON) is a NASA-collaborative environment that leverages game technology to bring three-dimensional simulations to conventional desktop and laptop computers in order to allow teams of engineers working on design and operations, either individually or in groups, to view and collaborate on 3D representations of data generated by authoritative tools such as Delmia Envision, Pro/Engineer, or Maya. The DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3D visual environment. DON has been designed to enhance accessibility and user ability to observe and analyze visual simulations in real time. A variety of NASA mission segment simulations [Synergistic Engineering Environment (SEE) data, NASA Enterprise Visualization Analysis (NEVA) ground processing simulations, the DSS simulation for lunar operations, and the Johnson Space Center (JSC) TRICK tool for guidance, navigation, and control analysis] were experimented with. Desired functionalities, [i.e. Tivo-like functions, the capability to communicate textually or via Voice-over-Internet Protocol (VoIP) among team members, and the ability to write and save notes to be accessed later] were targeted. The resulting DON application was slated for early 2008 release to support simulation use for the Constellation Program and its teams. Those using the DON connect through a client that runs on their PC or Mac. This enables them to observe and analyze the simulation data as their schedule allows, and to review it as frequently as desired. DON team members can move freely within the virtual world. Preset camera points can be established, enabling team members to jump to specific views. This improves opportunities for shared analysis of options, design reviews, tests, operations, training, and evaluations, and improves prospects for verification of requirements, issues, and approaches among dispersed teams.

  9. The Distributed-SDF Domain

    Cuadrado, Daniel Lázaro; Ravn, Anders Peter; Koch, Peter

    2005-01-01

    When actors are deployed remotely, virtual connections (over the network) are made among them in a way that mimics the ones in the original model, making the communication between actors decentralized. One of the architectural issues is how to identify the receivers in a distributed setup. A receiver that resides locally in a model can be distinctly identified, but not when it is distributed. To solve this, the receiver class has been extended with a tag. Once the actors are distributed and virtually connected, the distributed platform is ready to start the simulation. The Distributed-SDF director orchestrates the execution of the scheduler in a centralized manner. In order to allow parallel execution of actors, it creates threads locally to control every distributed actor. This allows simultaneously firing those actors that are ready and have no interdependencies, since several actors can execute at the same time.

  10. Degree distribution in discrete case

    Vertex degree in many network models and real-life networks is limited to non-negative integers. By means of measure and integral, the relation between the degree distribution and the cumulative degree distribution in the discrete case is analyzed. The degree distribution, obtained by differentiating its cumulative, is only suitable for the continuous case or the discrete case with constant degree change. When the degree change is not a constant but proportional to the degree itself, the power-law degree distribution and its cumulative have the same exponent, and the mean value is finite for a power-law exponent greater than 1. -- Highlights: → Degree change is the crux for using the cumulative degree distribution method. → It suits the discrete case with constant degree change. → If degree change is proportional to degree, the power-law degree distribution and its cumulative have the same exponent. → In addition, the mean value is finite for a power-law exponent greater than 1.
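
    A minimal worked sketch of the proportional-degree-change case, assuming (as an illustration, not the paper's derivation) that the admissible degrees form a geometric sequence $k_i = k_0 a^i$ with $a > 1$ and that the degree distribution is $P(k_i) = C k_i^{-\gamma}$:

        \[
        P_{\mathrm{cum}}(k_j) = \sum_{i \ge j} C k_i^{-\gamma}
          = C k_j^{-\gamma} \sum_{m \ge 0} a^{-\gamma m}
          = \frac{C}{1 - a^{-\gamma}}\, k_j^{-\gamma},
        \]

    so the cumulative keeps the same exponent $\gamma$ as the degree distribution itself, and the mean $\sum_i k_i P(k_i) \propto \sum_i k_i^{\,1-\gamma}$ is finite exactly when $\gamma > 1$, consistent with the statement above.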

  11. Sales Distribution of Consumer Electronics

    Hisano, Ryohei

    2010-01-01

    Using the uniform most powerful unbiased test, we observed the sales distribution of consumer electronics in Japan on a daily basis and report that it follows both a lognormal distribution and a power-law distribution and depends on the state of the market. We show that these switches occur quite often. The underlying sales dynamics found between both periods nicely matched a multiplicative process. However, even though the multiplicative term in the process displays a size-dependent relationship when a steady lognormal distribution holds, it shows a size-independent relationship when the power-law distribution holds. This difference in the underlying dynamics is responsible for the difference in the two observed distributions.

  12. Distribution Principle of Bone Tissue

    Fan, Yifang; Fan, Yubo; Xu, Zongxiang; Li, Zhiyu

    2009-01-01

    Using analytic and experimental techniques, we present an exploratory study of the mass distribution features of the high coincidence of the centre of mass of heterogeneous bone tissue in vivo and its centroid of geometry position. A geometric concept of the average distribution radius of bone tissue is proposed, and a functional relation of this geometric distribution feature between the partition density and its relative tissue average distribution radius is observed. Based upon the mass distribution feature, our results suggest a relative distance assessment index between the center of mass of cortical bone and the bone center of mass, and establish a bone strength equation. Analysing the data of the human foot in vivo, we notice that the mass and geometric distribution laws have expanded the connotation of Wolff's law, which implies a leap towards the quantitative description of bone strength. We finally conclude that this will not only make a positive contribution to help assess osteoporosis, but will also provide...

  13. Parametrization of nuclear parton distributions

    M Hirai; S Kumano; M Miyama

    2001-08-01

    Optimum nuclear parton distributions are obtained by analysing available experimental data on electron and muon deep inelastic scattering (DIS). The distributions are given at Q2 = 1 GeV2 with a number of parameters, which are determined by a χ2 analysis of the data. Valence-quark distributions are relatively well determined at medium x, but they are slightly dependent on the assumed parametrization form, particularly at small x. Although antiquark distributions are shadowed at small x, their behavior is not obvious at medium x from the F2 data. The gluon distributions could not be restricted well by the inclusive DIS data; however, the analysis tends to support gluon shadowing at small x. We provide analytical expressions and computer subroutines for calculating the nuclear parton distributions, so that other researchers can use them for applications to other high-energy nuclear reactions.

  14. Polynomial Learning of Distribution Families

    Belkin, Mikhail

    2010-01-01

    The question of polynomial learnability of probability distributions, particularly Gaussian mixture distributions, has recently received significant attention in theoretical computer science and machine learning. However, despite major progress, the general question of polynomial learnability of Gaussian mixture distributions still remained open. The current work resolves the question of polynomial learnability for Gaussian mixtures in high dimension with an arbitrary fixed number of components. The result on learning Gaussian mixtures relies on an analysis of distributions belonging to what we call "polynomial families" in low dimension. These families are characterized by their moments being polynomial in parameters and include almost all common probability distributions as well as their mixtures and products. Using tools from real algebraic geometry, we show that parameters of any distribution belonging to such a family can be learned in polynomial time and using a polynomial number of sample points. The r...

  15. Distributional disputes and civil conflict

    2003-01-01

    Some polities are able to use constitutionally prescribed political processes to settle distributional disputes, whereas in other polities distributional disputes result in civil conflict. Theoretical analysis reveals that the following properties help to make it possible to design a self-enforcing constitution that can settle recurring distributional disputes between social classes without civil conflict: • Neither social class has a big advantage in civil conflict. • The expected incrementa...

  16. Morgenstern type bivariate Lindley Distribution

    V S Vaidyanathan; Sharon Varghese, A

    2016-01-01

    In this paper, a bivariate Lindley distribution using the Morgenstern approach is proposed, which can be used for modeling bivariate lifetime data. Some characteristics of the distribution like the moment generating function, joint moments, Pearson correlation coefficient, survival function, hazard rate function, mean residual life function, vitality function and stress-strength parameter R = Pr(Y < X) are derived. ... distribution is an increasing (decreasing) f...

  17. Moment Distributions of Phase Type

    Bladt, Mogens; Nielsen, Bo Friis

    2011-01-01

    Moment distributions of phase-type and matrix-exponential distributions are shown to remain within their respective classes. We provide a probabilistic phase-type representation for the former case and an alternative representation, with an analytically appealing form, for the latter. First order moment distributions are of special interest in areas like demography and economics, and we calculate explicit formulas for the Lorenz curve and Gini index used in these disciplines.

  18. Reliability in Distributed Software Applications

    Catalin Alexandru TANASIE; Sorin VINTURIS; Adrian GRIGORIVICI

    2011-01-01

    Reliability is of vital importance for distributed software applications and should be ensured in all stages of the development cycle. Ensuring a high level of reliability for distributed software applications leads to competitive applications which increase the level of user satisfaction. The aim of this paper is to present techniques and methods which ensure a high level of reliability. A model for estimating the reliability through risk assessment is presented. Distributed software applicatio...

  19. Brine Distribution after Vacuum Saturation

    Hedegaard, Kathrine; Andersen, Bertel Lohmann

    1999-01-01

    Experiments with the vacuum saturation method for brine in plugs of chalk showed that a homogeneous distribution of brine cannot be ensured at saturations below 20% volume. Instead of a homogeneous volume distribution, the brine becomes concentrated close to the surfaces of the plugs.

  20. Reliability in Distributed Software Applications

    Catalin Alexandru TANASIE

    2011-01-01

    Reliability is of vital importance for distributed software applications and should be ensured in all stages of the development cycle. Ensuring a high level of reliability for distributed software applications leads to competitive applications which increase the level of user satisfaction. The aim of this paper is to present techniques and methods which ensure a high level of reliability. A model for estimating the reliability through risk assessment is presented. Distributed software applications are composed of multiple components spread across multiple heterogeneous platforms, and partial failures are inherent. To ensure high reliability, it is very important that the input data for distributed application components are correct and complete.

  1. Universality of citation distributions revisited

    Waltman, Ludo; van Raan, Anthony F J

    2011-01-01

    Radicchi, Fortunato, and Castellano [arXiv:0806.0974, PNAS 105(45), 17268] claim that, apart from a scaling factor, all fields of science are characterized by the same citation distribution. We present a large-scale validation study of this universality-of-citation-distributions claim. Our analysis shows that claiming citation distributions to be universal for all fields of science is not warranted. Although many fields indeed seem to have fairly similar citation distributions, there are also quite a few exceptions. We also briefly discuss the consequences of our findings for the measurement of scientific impact using citation-based bibliometric indicators.

  2. Distribution system modeling and analysis

    Kersting, William H

    2002-01-01

    For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems-often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures.Distribution System Modeling and Analysis helps prevent those errors. It gives re

  3. The Transmuted Inverse Exponential Distribution

    Pelumi Oguntunde

    2014-12-01

    This article introduces a two-parameter probability model which represents another generalization of the Inverse Exponential distribution by using the quadratic rank transmuted map. The proposed model is named the Transmuted Inverse Exponential (TIE) distribution and its statistical properties are systematically studied. We provide explicit expressions for its moments, moment generating function, quantile function, reliability function and hazard function. We estimate the parameters of the TIE distribution using the method of maximum likelihood estimation (MLE). The hazard function of the model has an inverted bathtub shape and we propose the usefulness of the TIE distribution in modeling breast cancer and bladder cancer data sets.
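
    For orientation, the quadratic rank transmutation map mentioned above is commonly written as $G(x) = (1+\lambda)F(x) - \lambda F(x)^2$ with $|\lambda| \le 1$; applying it to the Inverse Exponential baseline $F(x) = e^{-\theta/x}$ suggests a CDF of the form below (a sketch of the likely construction, not a formula quoted from the article):

        \[
        G_{\mathrm{TIE}}(x) = (1+\lambda)\, e^{-\theta/x} - \lambda\, e^{-2\theta/x},
        \qquad x > 0,\ \theta > 0,\ |\lambda| \le 1 .
        \]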

  4. Theoretically Optimal Distributed Anomaly Detection

    National Aeronautics and Space Administration — A novel general framework for distributed anomaly detection with theoretical performance guarantees is proposed. Our algorithmic approach combines existing anomaly...

  5. On positivity of parton distributions

    Altarelli, Guido; Ridolfi, G; Altarelli, Guido; Forte, Stefano; Ridolfi, Giovanni

    1998-01-01

    We discuss the bounds on polarized parton distributions which follow from their definition in terms of cross section asymmetries. We spell out how the bounds obtained in the naive parton model can be derived within perturbative QCD at leading order when all quark and gluon distributions are defined in terms of suitable physical processes. We specify a convenient physical definition for the polarized and unpolarized gluon distributions in terms of Higgs production from gluon fusion. We show that these bounds are modified by subleading corrections, and we determine them up to NLO. We examine the ensuing phenomenological implications, in particular in view of the determination of the polarized gluon distribution.

  6. Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5

    Arp, J.A.; Burnett, R.A.; Carter, R.J. [and others]

    1998-06-26

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertains to an individual EOC's jurisdiction is stored on the EOC's local server. Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS

  7. Remote handling of TEXTOR diagnostics using CORBA as communication architecture

    At the Forschungszentrum Juelich, an upgrade of the existing distributed system for data acquisition (DAS) at the fusion experiment TEXTOR94 is under development. DAS is currently restricted to VAX/VMS and DECNET based communications, but it is planned to add UNIX based systems, and to open the local network for an improved wide area network access for remote operations. Therefore, the DAS system is to be equipped with a suitable client/server interface, which is able to cope with the various computer platforms and operating systems involved. For this purpose, the common object request broker architecture (CORBA) will be used. CORBA is an object oriented, standardized architecture for distributed systems, which provides a high degree of modularity in software design and allows for flexible implementations. It is to act as a connecting link between the existing system and new extensions. In order to provide the desired client/server functionality for the data acquisition tasks, the components of the system (diagnostic, database, etc.) are modelled by CORBA interfaces. Processes for diagnostic control and data readout in the existing OpenVMS systems are aimed at to be accessible by CORBA server implementations. The corresponding client implementations will be developed for the operating system platforms most frequently used at TEXTOR94. Communication between clients and server will be based on TCP/IP and are to be managed by CORBA. By this standardized way, remote control of diagnostic instrumentation becomes possible in a multiplatform computer and wide area network environment. At a later stage it is intended to integrate the system into a 'virtual control room' environment, which should enable the participation of cooperating institutions in the full experimental program of TEXTOR94. (orig.)

  8. Distribution network planning method considering distributed generation for peak cutting

    Conventional distribution planning methods based on peak load bring about large investment, high risk and low utilization efficiency. A distribution network planning method considering distributed generation (DG) for peak cutting is proposed in this paper. The new integrated distribution network planning method with DG implementation aims to minimize the sum of feeder investments, DG investments, energy loss cost and the additional cost of DG for peak cutting. Using solution techniques combining a genetic algorithm (GA) with a heuristic approach, the proposed model determines the optimal planning scheme, including the feeder network and the siting and sizing of DG. The strategy for the site and size of DG, which is based on the radial structure characteristics of the distribution network, reduces the complexity of solving the optimization model and eases the computational burden substantially. Furthermore, the operation schedule of DG at the different load levels is also provided.

  9. The distribution and quantiles of functionals of weighted empirical distributions when observations have different distributions

    Withers, C S

    2010-01-01

    This paper extends Edgeworth-Cornish-Fisher expansions for the distribution and quantiles of nonparametric estimates in two ways. Firstly, it allows observations to have different distributions. Secondly, it allows the observations to be weighted in a predetermined way. The use of weighted estimates has a long history, including applications to regression, rank statistics and Bayes theory. However, asymptotic results have generally been only first order (the CLT and weak convergence). We give third order asymptotics for the distribution and percentiles of any smooth functional of a weighted empirical distribution, thus allowing a considerable increase in accuracy over earlier CLT results. Consider independent non-identically distributed (non-iid) observations $X_{1n}, ..., X_{nn}$ in $R^s$. Let $\hat{F}(x)$ be their weighted empirical distribution with weights $w_{1n}, ..., w_{nn}$. We obtain cumulant expansions and hence Edgeworth-Cornish-Fisher expansions for $T(\hat{F})$ for any smooth functional ...
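
    Assuming the weights are normalised to sum to one (a common convention; the abstract does not state it), the weighted empirical distribution under study has the form

        \[
        \hat{F}(x) = \sum_{i=1}^{n} w_{in}\, \mathbf{1}\{X_{in} \le x\},
        \qquad \sum_{i=1}^{n} w_{in} = 1,
        \]

    and the expansions concern smooth functionals $T(\hat{F})$ of this estimate.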

  10. Current Distribution Mapping for PEMFCs

    A developed measurement system for current distribution mapping has enabled a new approach for operational measurements in proton exchange membrane fuel cells (PEMFCs). Currently, there are many issues with the methods used to measure current distribution; some of the problems that arise are breaking up the fuel cell components, and these measurements are costly. Within this field of work, there is a cost-effective method and an easy technique for mapping the current distribution within a fuel cell while not disrupting reactant flow. The physical setup of this method takes a current distribution board and inserts it between an anode flow field plate and a gas diffusion layer. From this layout, the current distribution can be directly measured from the current distribution board. This novel technique can be simply applied to different fuel cell hardware. Further, it can also be used in a fuel cell stack by inserting multiple current distribution boards into the stack cells. The results from the current distribution measurements and the electrochemical predictions from computational fluid dynamics modeling were used to analyze water transport inside the fuel cell. This developed system can be a basis for a good understanding of optimization for fuel cell design and operation mode.

  11. Easy and flexible mixture distributions

    Fosgerau, Mogens; Mabit, Stefan L.

    2013-01-01

    We propose a method to generate flexible mixture distributions that are useful for estimating models such as the mixed logit model using simulation. The method is easy to implement, yet it can approximate essentially any mixture distribution. We test it with good results in a simulation study and...

  12. Is the Fisher distribution additive?

    Man, Otakar

    2005-01-01

    Roč. 49, č. 4 (2005), s. 561-572. ISSN 0039-3169 Institutional research plan: CEZ:AV0Z3013912 Keywords : distribution on the sphere * Fischer distribution * paleomagnetism * field test Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.656, year: 2005

  13. COS NUV MAMA Fold Distribution

    Wheeler, Thomas

    2013-10-01

    The performance of the MAMA microchannel plate can be monitored using a MAMA fold analysis procedure. The fold analysis provides a measurement of the distribution of charge cloud sizes incident upon the anode giving some measure of changes in the pulse-height distribution of the MCP and, therefore, MCP gain. This proposal executes the same steps as Cycle 20 proposal 13128.

  14. Distribution services and economic growth

    Yi Jin; Zhixiong Zeng

    2006-01-01

    We analyze how the presence of distribution services affects an economy's long-run growth. We show that in an endogenous growth model, increases in the unit distribution requirement lower the economy's balanced growth rate by reducing the proportion of aggregate employment allocated to the manufacturing sector. This contrasts with the neutrality result in the exogenous growth case.

  15. ON GENERALIZED SARMANOV BIVARIATE DISTRIBUTIONS

    , G. Jay Kerns

    2011-01-01

    A class of bivariate distributions which generalizes the Sarmanov class is introduced. This class possesses a simple analytical form and desirable dependence properties. The admissible range of the association parameter for given bivariate distributions is derived, and the range of the correlation coefficients is also presented.

  16. Water Treatment Technology - Distribution Systems.

    Ross-Harrington, Melinda; Kincaid, G. David

    One of twelve water treatment technology units, this student manual on distribution systems provides instructional materials for six competencies. (The twelve units are designed for a continuing education training course for public water supply operators.) The competencies focus on the following areas: types of pipe for distribution systems, types…

  17. Distributed Leadership: Friend or Foe?

    Harris, Alma

    2013-01-01

    Distributed leadership is now widely known and variously enacted in schools and school systems. Distributed leadership implies a fundamental re-conceptualisation of leadership as practice and challenges conventional wisdom about the relationship between formal leadership and organisational performance. There has been much debate, speculation and…

  18. Distributed generation and distribution market diversity in Europe

    The unbundling of the electricity power system will play a key role in the deployment of distributed generation (DG) in European distribution systems evolving towards Smart Grids. The present paper firstly reviews the relevant European Union (EU) regulatory framework: specific attention is paid to the concept of unbundling of the power distribution sector in Europe. Afterwards, the focus is on the current state of penetration of DG technologies in the EU Member States and the corresponding interrelations with distribution features. A comparison between the unbundling of the distribution and supply markets using econometric indicators such as the Herfindahl-Hirschmann (IHH) and the Shannon-Wiener (ISW) indices is then presented. Finally, a comparative analysis between these indices and the current level of penetration of distributed generation in most EU Member States is shown; policy recommendations conclude the paper. - Highlights: →The EU regulatory framework and the concept of unbundling are addressed. →A comparison between the unbundling of the distribution and supply markets is shown. →The Herfindahl-Hirschmann and the Shannon-Wiener econometric indices are applied. →A comparison between the indices and the penetration level of DG in EU is presented.

  19. DISTRIBUTION OF LOAD USING MOBILE AGENT IN DISTRIBUTED WEB SERVERS

    Vijayakumar G. Dhas

    2014-01-01

    The continuing growth of the World-Wide Web is placing increasing demands on popular Web servers. Many sites now use distributed Web servers (i.e., groups of machines) to service the increasing number of client requests, as a single server cannot handle the workload. Incoming client requests must be distributed in some fashion among the machines in the distributed Web server to improve performance. In the existing work, reducing the high message complexity is a challenge. This study introduces a novel algorithm with low message complexity, named Load Distribution by dynamically Fixing input to the server using Mobile agent (LDFM), which distributes incoming requests as they arrive from the client world, to avoid overloading the distributed web servers. LDFM uses prefetch techniques to balance the load among the distributed web servers. Mobile agents are susceptible to failure; this issue is also addressed to bring reliability to the algorithm. The simulation results confirm that the proposed method is reliable. The relative improvement in throughput, compared with the existing methods, is appreciable.
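
    The published algorithm is not reproduced in the abstract; as a loose illustration of load distribution driven by prefetched load information (in LDFM a mobile agent would gather those figures), a dispatcher can simply route each incoming request to the currently least-loaded server. Everything below is an assumed sketch, not the LDFM algorithm itself.

        # Assumed sketch of least-loaded dispatching using prefetched load figures;
        # this is an illustration, not the published LDFM algorithm.
        from typing import Dict

        def prefetch_loads() -> Dict[str, int]:
            # In LDFM a mobile agent would collect these from the servers;
            # here they are hard-coded for illustration.
            return {"web1": 12, "web2": 3, "web3": 7}

        def dispatch(request_id: int, loads: Dict[str, int]) -> str:
            """Send the request to the server with the fewest active requests."""
            target = min(loads, key=loads.get)
            loads[target] += 1            # account for the request we just routed
            return target

        if __name__ == "__main__":
            loads = prefetch_loads()
            for rid in range(5):
                print("request", rid, "->", dispatch(rid, loads))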

  20. Distributed generation and distribution market diversity in Europe

    Lopes Ferreira, H., E-mail: helder.ferreira@ec.europa.eu [European Commission, DG Joint Research Centre, Institute of Energy, P.O. Box 2, 1755ZG Petten (Netherlands); Costescu, A. [European Commission, DG Joint Research Centre, Institute of Energy, P.O. Box 2, 1755ZG Petten (Netherlands); ENSM.SE (Ecole Nationale Superieure des Mines de Saint Etienne), 158, cours Fauriel, 42023 Saint Etienne cedex 2 (France); L' Abbate, A. [RSE - Ricerca sul Sistema Energetico SpA, Power Systems Development Department, Via Rubattino, 54, 20134 Milan (Italy); Minnebo, P.; Fulli, G. [European Commission, DG Joint Research Centre, Institute of Energy, P.O. Box 2, 1755ZG Petten (Netherlands)

    2011-09-15

    The unbundling of the electricity power system will play a key role in the deployment of distributed generation (DG) in European distribution systems evolving towards Smart Grids. The present paper firstly reviews the relevant European Union (EU) regulatory framework: specific attention is paid to the concept of unbundling of the power distribution sector in Europe. Afterwards, the focus is on the current state of penetration of DG technologies in the EU Member States and the corresponding interrelations with distribution features. A comparison between the unbundling of the distribution and supply markets using econometric indicators such as the Herfindahl-Hirschmann (I{sub HH}) and the Shannon-Wiener (I{sub SW}) indices is then presented. Finally, a comparative analysis between these indices and the current level of penetration of distributed generation in most EU Member States is shown; policy recommendations conclude the paper. - Highlights: >The EU regulatory framework and the concept of unbundling are addressed. >A comparison between the unbundling of the distribution and supply markets is shown. >The Herfindahl-Hirschmann and the Shannon-Wiener econometric indices are applied. >A comparison between the indices and the penetration level of DG in EU is presented.

  1. Distribution-Specific Agnostic Boosting

    Feldman, Vitaly

    2009-01-01

    We consider the problem of boosting the accuracy of weak learning algorithms in the agnostic learning framework of Haussler (1992) and Kearns et al. (1992). Known algorithms for this problem (Ben-David et al., 2001; Gavinsky, 2002; Kalai et al., 2008) follow the same strategy as boosting algorithms in the PAC model: the weak learner is executed on the same target function but over different distributions on the domain. We demonstrate boosting algorithms for the agnostic learning framework that only modify the distribution on the labels of the points (or, equivalently, modify the target function). This allows boosting a distribution-specific weak agnostic learner to a strong agnostic learner with respect to the same distribution. When applied to the weak agnostic parity learning algorithm of Goldreich and Levin (1989) our algorithm yields a simple PAC learning algorithm for DNF and an agnostic learning algorithm for decision trees over the uniform distribution using membership queries. These results substantia...

  2. Testing Closeness of Discrete Distributions

    Batu, Tugkan; Rubinfeld, Ronitt; Smith, Warren D; White, Patrick

    2010-01-01

    Given samples from two distributions over an $n$-element set, we wish to test whether these distributions are statistically close. We present an algorithm which uses a number of samples sublinear in $n$, specifically $O(n^{2/3}\epsilon^{-8/3}\log n)$ independent samples from each distribution, runs in time linear in the sample size, makes no assumptions about the structure of the distributions, and distinguishes the cases when the distance between the distributions is small (less than $\max\{\epsilon^{4/3}n^{-1/3}/32, \epsilon n^{-1/2}/4\}$) or large (more than $\epsilon$) in $\ell_1$ distance. This result can be compared to the lower bound of $\Omega(n^{2/3}\epsilon^{-2/3})$ for this problem given by Valiant. Our algorithm has applications to the problem of testing whether a given Markov process is rapidly mixing. We present sublinear algorithms for several variants of this problem as well.
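
    For readers unfamiliar with the notation, the distance being tested is the $\ell_1$ distance between the two distributions $p$ and $q$ on the $n$-element set,

        \[
        \| p - q \|_1 = \sum_{i=1}^{n} \left| p(i) - q(i) \right| ,
        \]

    and the tester distinguishes $\|p-q\|_1 \le \max\{\epsilon^{4/3} n^{-1/3}/32,\ \epsilon n^{-1/2}/4\}$ from $\|p-q\|_1 \ge \epsilon$ using $O(n^{2/3}\epsilon^{-8/3}\log n)$ samples from each distribution.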

  3. Dose distributions around selectron applicators

    Measured and calculated dose distributions around selectron applicators, loaded with 60Co high dose rate pellets, are presented. The effect of the stopping screw, spacers, pellets themselves and the applicator wall on the dose distribution is discussed. The measured dose distribution is in almost perfect agreement with the calculated distribution in planes perpendicular to the applicator axis and containing a source. On the applicator axis directly below the applicator the measured dose amounts to about 75% of the calculated value, when only the stopping screw attenuates the beam from a pellet. When the beam is attenuated by spacers in addition to the stopping screw, the discrepancy between the calculated and measured dose may exceed 50%. Clinically relevant source geometries are also discussed. It is shown that for most regions around the applicator the method of a simple addition of dose contributions from individual point sources is an acceptable approximation for the calculation of dose distributions around the selectron applicators

  4. 2014 Distributed Wind Market Report

    Orell, A; Foster, N.

    2015-08-01

    According to the 2014 Distributed Wind Market Report, distributed wind reached a cumulative capacity of almost 1 GW (906 MW) in the United States in 2014, reflecting nearly 74,000 wind turbines deployed across all 50 states, Puerto Rico, and the U.S. Virgin Islands. In total, 63.6 MW of new distributed wind capacity was added in 2014, representing nearly 1,700 units and $170 million in investment across 24 states. In 2014, America's distributed wind energy industry supported a growing domestic industrial base as exports from United States-based small wind turbine manufacturers accounted for nearly 80% of United States-based manufacturers' sales.

  5. Content Distribution for Telecom Carriers

    Ben Falchuk

    2006-08-01

    Distribution of digital content is a key revenue opportunity for telecommunications carriers. As media content moves from analog and physical media-based distribution to digital on-line distribution, a great opportunity exists for carriers to claim their role in the media value chain and grow revenue by enhancing their broadband “all you can eat” high speed Internet access offer to incorporate delivery of a variety of paid content. By offering a distributed peer to peer content delivery capability with authentication, personalization and payment functions, carriers can gain a larger portion of the revenue paid for content both within and beyond their traditional service domains. This paper describes an approach to digital content distribution that leverages existing Intelligent Network infrastructure that many carriers already possess, as well as Web Services.

  6. Reconfiguration of distribution system with distributed generation using Firefly algorithm

    Yordthong Lowvachirawat

    2015-03-01

    This paper presents a reconfiguration technique for distribution systems with distributed generation (DG) using the Firefly algorithm (FA). The objective is to minimize the total power losses of the system. In this paper, the technique is tested on the IEEE 33-bus radial distribution system. The result shows that the method can be applied to a practical system using switching for reconfiguration. In comparison with other techniques, it is also capable of finding good solutions. Therefore, it can be used as an alternative to other reconfiguration techniques.
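
    The paper's loss model and switching encoding are not given in the abstract; the sketch below shows only the generic firefly update (attractiveness decaying with distance plus a small random step) on an arbitrary loss function, so that the optimisation loop referred to above is concrete. The objective, bounds and parameter values are assumptions, not the 33-bus study.

        # Generic Firefly algorithm sketch (minimisation); the objective, bounds
        # and parameter values are assumptions, not the paper's 33-bus loss model.
        import math
        import random

        def loss(x):
            # Placeholder objective standing in for total power losses.
            return sum(v * v for v in x)

        def firefly(dim=4, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
            pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
            for _ in range(iters):
                scores = [loss(p) for p in pop]
                for i in range(n):
                    for j in range(n):
                        if scores[j] < scores[i]:      # j is brighter: move i towards j
                            r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                            beta = beta0 * math.exp(-gamma * r2)
                            pop[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                                      for a, b in zip(pop[i], pop[j])]
                            scores[i] = loss(pop[i])
            best = min(pop, key=loss)
            return best, loss(best)

        if __name__ == "__main__":
            solution, value = firefly()
            print("best loss:", round(value, 6))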

  7. Review on Islanding Operation of Distribution System with Distributed Generation

    Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte

    2011-01-01

    The growing environmental concern and various benefits of distributed generation (DG) have resulted in significant penetration of DG in many distribution systems worldwide. One of the major expected benefits of DG is the improvement in the reliability of power supply by supplying load during power outage by operating in an island mode. However, there are many challenges to overcome before islanding operation of a distribution system with DG can become a viable solution in future. This paper reviews some of the major challenges with islanding operation and explores some possible solutions to...

  8. Constructions for a bivariate beta distribution

    Olkin, Ingram; Trikalinos, Thomas A.

    2014-01-01

    The beta distribution is a basic distribution serving several purposes. It is used to model data, and also, as a more flexible version of the uniform distribution, it serves as a prior distribution for a binomial probability. The bivariate beta distribution plays a similar role for two probabilities that have a bivariate binomial distribution. We provide a new multivariate distribution with beta marginal distributions, positive probability over the unit square, and correlations over the full ...

  9. Distributed generation solutions: changes and opportunities for distribution companies

    The rapid expansion of hydrogen based power alternatives and other significant distributed generation (DG) alternatives is changing the traditional role of the local electricity distributor. This presentation is about opportunities related to incorporating such facilities into LDC and client distribution systems. This ranges from support of large co-generation facilities, such as that under development at Mississauga's Pearson International, to the integration of output from varied new forms of small-scale wind, biomass, and fuel cell power production within local distribution networks. Mr. Chuddy will examine Enersource's present and anticipated role: (1) as a technologies company aiding in developing distribution systems that integrate and fully utilize DG technology into those models and (2) as an LDC that continues to lead the conservation movement on several fronts, including advocacy of aggregated demand response mechanisms in Ontario's wholesale market design. (author)

  10. Protection of Distribution Systems with Distributed Energy Resources

    Bak-Jensen, Birgitte; Browne, Matthew; Calone, Roberto;

    The usage of Distributed Energy Resources (DER) in utilities around the world is expected to increase significantly. The existing distribution systems have been generally designed for unidirectional power flow, and feeders are opened and locked out for any fault within. However, in the future this practice may lead to a loss of significant generation where each feeder may have significant DER penetration. Also, utilities have started to investigate islanding operation of distribution systems with DER as a way to improve the reliability of the power supply to customers. This report is the result of ... that include DER (chapter 5). It features the impact of DER on distribution system protection and sums up recommended best practices for reliable island detection. The third main part offers a proactive approach to technology trends (chapter 6), under consideration of new applications of teleprotection...

  11. The Impact of Connecting Distributed Generation to the Distribution System

    E. V. Mgaya

    2007-01-01

    This paper deals with the general problem of utilizing renewable energy sources to generate electric energy. Recent advances in renewable energy power generation technologies, e.g., wind and photovoltaic (PV) technologies, have led to increased interest in the application of these generation devices as distributed generation (DG) units. This paper presents the results of an investigation into possible improvements in the system voltage profile and reduction of system losses when adding wind power DG (wind-DG) to a distribution system. Simulation results are given for a case study, and these show that properly sized wind DGs, placed at carefully selected sites near key distribution substations, could be very effective in improving the distribution system voltage profile and reducing power losses, and hence could improve the effective capacity of the system.

  12. Natural Gas Distribution Regulation Natural Gas Distribution Regulation

    Fernando Salas

    1995-03-01

    This document discusses the economic content of a set of Rulings affecting the provision of natural gas distribution services in Mexico. As such, it describes the mechanisms proposed in order to ensure economic efficiency in the undertaking of such activity, i.e., competition policies, rate regulation, delimitation of licensed geographic regions and design of auction procedures for the granting of distribution franchises.

  13. Islanding Operation of Distribution System with Distributed Generations

    Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte

    2010-01-01

    The growing interest in distributed generations (DGs), due to environmental concern and various other reasons, has resulted in significant penetration of DGs in many distribution systems worldwide. DGs come with many benefits. One of the benefits is improved reliability by supplying load during a power outage by operating in island mode. However, there are many challenges to overcome before islanding can become a viable solution in future. This paper points out some of the major challenges with island operation and suggests some possible solutions.

  14. Nuclear effects on valence quark distributions and sea quark distributions

    A method is presented to obtain the nuclear effect functions RvA(xt) and RsA(xt) on valence quark distributions and sea quark distributions from the data of the l-A DIS process and the nuclear Drell-Yan process. Both functions may be used to test the theoretical models explaining the nuclear effects. As an example, RvFe(xt) and RsFe(xt) of the iron nucleus were obtained by this method.

  15. Power Generation and Distribution via Distributed Coordination Control

    Kim, Byeong-Yeon; Oh, Kwang-Kyo; Ahn, Hyo-Sung

    2014-01-01

    This paper presents power coordination, power generation, and power flow control schemes for supply-demand balance in distributed grid networks. Consensus schemes using only local information are employed to generate power coordination, power generation and power flow control signals. For the supply-demand balance, it is required to determine the amount of power needed at each distributed power node. Also due to the different power generation capacities of each power node, coordination of pow...
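
    The consensus schemes mentioned above are not spelled out in the abstract; a bare-bones version of the usual local averaging update over a communication graph is sketched below, with the graph topology and step size chosen only for illustration and not taken from the paper.

        # Minimal consensus-averaging sketch over an assumed 4-node line graph;
        # the update x_i <- x_i + eps * sum_j (x_j - x_i) uses only neighbour values.
        NEIGHBOURS = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # assumed topology
        EPS = 0.25                                            # assumed step size

        def consensus_step(x):
            return [x[i] + EPS * sum(x[j] - x[i] for j in NEIGHBOURS[i])
                    for i in range(len(x))]

        if __name__ == "__main__":
            # Local estimates of required generation at each node (arbitrary values).
            x = [4.0, 1.0, 3.0, 0.0]
            for _ in range(50):
                x = consensus_step(x)
            print([round(v, 3) for v in x])   # all entries approach the average (2.0)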

  16. Natural Gas Distribution Regulation Natural Gas Distribution Regulation

    Fernando Salas; Benjamín Contreras

    1995-01-01

    This document discusses the economic content of a set of Rulings affecting the provision of natural gas distribution services in Mexico. As such, it describes the mechanisms proposed in order to ensure economic efficiency in the undertaking of such activity, i.e., competition policies, rate regulation, delimitation of licensed geographic regions and design of auction procedures for the granting of distribution franchises.

  17. Optimal power flow for distribution networks with distributed generation

    Radosavljević Jordan; Jevtić Miroljub; Klimenta Dardan; Arsić Nebojša

    2015-01-01

    This paper presents a genetic algorithm (GA) based approach for the solution of the optimal power flow (OPF) in distribution networks with distributed generation (DG) units, including fuel cells, micro turbines, diesel generators, photovoltaic systems and wind turbines. The OPF is formulated as a nonlinear multi-objective optimization problem with equality and inequality constraints. Due to the stochastic nature of energy produced from renewable sources, i....

  18. Distribution planning with reliability options for distributed generation

    The promotion of electricity generation from renewable energy sources (RES) and combined heat and power (CHP) has resulted in increasing penetration levels of distributed generation (DG). However, large-scale connection of DG involves profound changes in the operation and planning of electricity distribution networks. Distribution System Operators (DSOs) play a key role since these agents have to provide flexibility to their networks in order to integrate DG. Article 14.7 of EU Electricity Directive states that DSOs should consider DG as an alternative to new network investments. This is a challenging task, particularly under the current regulatory framework where DSOs must be legally and functionally unbundled from other activities in the electricity sector. This paper proposes a market mechanism, referred to as reliability options for distributed generation (RODG), which provides DSOs with an alternative to the investment in new distribution facilities. The mechanism proposed allocates the firm capacity required to DG embedded in the distribution network through a competitive auction. Additionally, RODG make DG partly responsible for reliability and provide DG with incentives for a more efficient operation taking into account the network conditions. (author)

  19. Aerosol Size Distributions In Auckland.

    Coulson, G.; Olivares, G.; Talbot, Nicholas

    2016-01-01

    Roč. 50, č. 1 (2016), s. 23-28. E-ISSN 1836-5876 Institutional support: RVO:67985858 Keywords : aerosol size distribution * particle number concentration * roadside Subject RIV: CF - Physical ; Theoretical Chemistry

  20. ASYMPTOTIC QUANTIZATION OF PROBABILITY DISTRIBUTIONS

    Klaus Pötzelberger

    2003-01-01

    We give a brief introduction to results on the asymptotics of quantization errors. The topics discussed include the quantization dimension, asymptotic distributions of sets of prototypes, asymptotically optimal quantizations, approximations and random quantizations.