WorldWideScience

Sample records for distributed computer control

  1. Distributed computer control system for reactor optimization

    International Nuclear Information System (INIS)

    Williams, A.H.

    1983-01-01

    At the Oldbury power station a prototype distributed computer control system has been installed. This system is designed to support research and development into improved reactor temperature control methods. This work will lead to the development and demonstration of new optimal control systems for improvement of plant efficiency and increase of generated output. The system can collect plant data from special test instrumentation connected to dedicated scanners and from the station's existing data processing system. The system can also, via distributed microprocessor-based interface units, make adjustments to the desired reactor channel gas exit temperatures. The existing control equipment will then adjust the height of control rods to maintain operation at these temperatures. The design of the distributed system is based on extensive experience with distributed systems for direct digital control, operator display and plant monitoring. The paper describes various aspects of this system, with particular emphasis on: (1) the hierarchical system structure; (2) the modular construction of the system to facilitate installation, commissioning and testing, and to reduce maintenance to module replacement; (3) the integration of the system into the station's existing data processing system; (4) distributed microprocessor-based interfaces to the reactor controls, with extensive security facilities implemented in hardware and software; (5) data transfer using point-to-point and bussed data links; (6) man-machine communication based on VDUs with computer input push-buttons and touch-sensitive screens; and (7) the use of a software system supporting a high-level engineer-orientated programming language at all levels in the system, together with comprehensive data link management.
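
    The record above sketches a concrete control path: a supervisory layer proposes channel gas exit temperature setpoints, which microprocessor-based interface units vet before the existing rod controls act. A minimal Python illustration of that flow follows; the limits, names and message format are invented for illustration, not taken from the Oldbury system.

    ```python
    # Hypothetical sketch of the hierarchical setpoint path: a supervisory
    # layer proposes a channel gas exit temperature, and an interface unit
    # applies a software security check (mirroring the hardware interlocks)
    # before forwarding it to the existing control equipment.

    CHANNEL_LIMITS_C = (250.0, 675.0)   # illustrative bounds, not plant data

    def validate_setpoint(temp_c: float) -> bool:
        lo, hi = CHANNEL_LIMITS_C
        return lo <= temp_c <= hi

    def adjust_channel(channel: int, temp_c: float) -> None:
        if not validate_setpoint(temp_c):
            raise ValueError(f"setpoint {temp_c} C rejected for channel {channel}")
        # In the real system this would be a message over a point-to-point
        # or bussed data link to the microprocessor interface unit.
        print(f"channel {channel}: desired gas exit temperature -> {temp_c} C")

    adjust_channel(12, 560.0)
    ```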

  2. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1988-09-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multi-user Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100K. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the efficient implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. 3 refs

  3. Distributed computer controls for accelerator systems

    Science.gov (United States)

    Moore, T. L.

    1989-04-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multiuser Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100k. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the effective implementation of the system. The definition, design, implementation, operation and total system performance will be discussed.

  4. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1989-01-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multiuser Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100k. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the effective implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. (orig.)

  5. Predictive access control for distributed computation

    DEFF Research Database (Denmark)

    Yang, Fan; Hankin, Chris; Nielson, Flemming

    2013-01-01

    We show how to use aspect-oriented programming to separate security and trust issues from the logical design of mobile, distributed systems. The main challenge is how to enforce various types of security policies, in particular predictive access control policies — policies based on the future behavior of a program. A novel feature of our approach is that we can define policies concerning secondary use of data…
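
    Aspect-oriented programming weaves policy checks around program logic without touching it. The sketch below gives a rough Python analogue using decorators; the policy shown (vetting a declared intent of secondary data use) is a stand-in invented here, not the paper's predictive-analysis calculus.

    ```python
    # Rough analogue of aspect-oriented policy enforcement via decorators.
    from functools import wraps

    def enforce(policy):
        """Weave a security check around a function, aspect-style."""
        def aspect(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                if not policy(fn.__name__, args, kwargs):
                    raise PermissionError(f"policy denied {fn.__name__}")
                return fn(*args, **kwargs)
            return wrapper
        return aspect

    # Crude stand-in for predictive access control: the caller declares the
    # intended secondary use of the data, and the policy vets it up front.
    def no_secondary_sharing(name, args, kwargs):
        return kwargs.get("intent") != "share_with_third_party"

    @enforce(no_secondary_sharing)
    def read_record(record_id, intent="local_analysis"):
        return {"id": record_id, "payload": "..."}

    print(read_record(42))                              # allowed
    # read_record(42, intent="share_with_third_party")  # raises PermissionError
    ```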

  6. Brookhaven Reactor Experiment Control Facility, a distributed function computer network

    International Nuclear Information System (INIS)

    Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

    1975-11-01

    A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and put into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented.

  7. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of new computer technologies on the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer', which allows the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (e.g., the addition of computers) to the software application without modifying its source code. This concept can also reduce the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.
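
    As a rough illustration of the 'virtual computer' idea, the following sketch hides the number of physical nodes behind a single submission interface, so application code is untouched when computers are added. The class and all names are hypothetical, not the CNES prototype.

    ```python
    # Minimal sketch: the application submits work to one abstract machine;
    # the runtime maps it onto however many physical nodes exist.
    from concurrent.futures import ThreadPoolExecutor

    class VirtualComputer:
        """Hides the physical node count behind a single submit() interface."""
        def __init__(self, n_nodes: int):
            self._pool = ThreadPoolExecutor(max_workers=n_nodes)

        def submit(self, fn, *args):
            return self._pool.submit(fn, *args)

    def application_task(x):
        return x * x

    # Adding nodes changes only this constructor argument, never the
    # application code above.
    vc = VirtualComputer(n_nodes=4)
    futures = [vc.submit(application_task, i) for i in range(8)]
    print([f.result() for f in futures])
    ```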

  8. EBR-II Cover Gas Cleanup System upgrade distributed control and front end computer systems

    International Nuclear Information System (INIS)

    Carlson, R.B.

    1992-01-01

    The Experimental Breeder Reactor II (EBR-II) Cover Gas Cleanup System (CGCS) control system was upgraded in 1991 to improve control and provide a graphical operator interface. The upgrade consisted of a main control computer, a distributed control computer, a front end input/output computer, a main graphics interface terminal, and a remote graphics interface terminal. This paper briefly describes the Cover Gas Cleanup System and the overall control system; gives reasons behind the computer system structure; and then gives a detailed description of the distributed control computer, the front end computer, and how these computers interact with the main control computer. The descriptions cover both hardware and software.

  9. Distributed computer control systems in future nuclear power plants

    International Nuclear Information System (INIS)

    Yan, G.; L'Archeveque, J.V.R.; Watkins, L.M.

    1978-09-01

    Good operating experience with computer control in CANDU reactors over the last decade justifies a broadening of the role of digital electronic and computer-related technologies in future plants. Functions of electronic systems in the total plant context are reappraised to help evolve an appropriate match between technology and future applications. The systems research, development and demonstration program at CRNL is described, focusing on the projects pertinent to real-time data acquisition and process control requirements. (author)

  10. Modern computer networks and distributed intelligence in accelerator controls

    International Nuclear Information System (INIS)

    Briegel, C.

    1991-01-01

    Appropriate hardware and software network protocols are surveyed for accelerator control environments. Accelerator controls network topologies are discussed with respect to the following criteria: vertical versus horizontal and distributed versus centralized. Decision-making considerations are provided for accelerator network architecture specification. Current trends and implementations at Fermilab are discussed

  11. The design development and commissioning of two distributed computer based boiler control systems

    International Nuclear Information System (INIS)

    Collier, D.; Johnstone, L.R.; Pringle, S.T.; Walker, R.W.

    1980-01-01

    The CEGB N.E. Region has recently commissioned two major boiler control schemes using distributed computer control systems. Both systems have considerable development potential to allow modifications to meet changing operational requirements. The distributed approach to control was chosen in both instances so as to achieve high control system availability and as a method of easing the commissioning programs. The experience gained with these two projects has reinforced the view that distributed computer systems show advantages over centralised single computers, especially if the software is designed for the distributed system. (auth)

  12. Memory intensive functional architecture for distributed computer control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures and one system implementation, a system for performing control and data acquisition for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  13. Distributed Information and Control system reliability enhancement by fog-computing concept application

    Science.gov (United States)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-03-01

    The paper focuses on the reliability of information and control systems. The authors propose a new complex approach to information and control system reliability enhancement based on applying elements of the fog-computing concept. The approach consists of a complex of optimization problems to be solved: estimation of the computational load that can be shifted to the edge of the network and the fog layer, distribution of computations among the data processing elements, and distribution of computations among the sensors. These problems, together with simulation results and discussion, are formulated and presented within this paper.
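
    One of the listed subproblems, distributing computations among processing elements, can be pictured with a toy greedy assignment that pushes load toward the network edge while respecting device capacities. Everything below (tasks, capacities, the greedy rule itself) is illustrative, not the authors' optimization formulation.

    ```python
    # Greedy sketch: shift as much computational load as fits from the
    # central node to fog/edge devices, respecting each device's capacity.
    tasks = [("filter", 3.0), ("aggregate", 5.0), ("detect", 8.0)]  # (name, load)
    capacity = {"edge-1": 4.0, "fog-1": 6.0, "center": float("inf")}

    assignment, used = {}, {k: 0.0 for k in capacity}
    for name, load in sorted(tasks, key=lambda t: -t[1]):
        # place each task on the first device (edge first) that can hold it
        for device in ("edge-1", "fog-1", "center"):
            if used[device] + load <= capacity[device]:
                assignment[name] = device
                used[device] += load
                break

    print(assignment)  # {'detect': 'center', 'aggregate': 'fog-1', 'filter': 'edge-1'}
    ```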

  14. Distributed computation of supremal conditionally-controllable sublanguages

    Czech Academy of Sciences Publication Activity Database

    Komenda, Jan; Masopust, Tomáš

    2016-01-01

    Roč. 89, č. 2 (2016), s. 424-436 ISSN 0020-7179 R&D Projects: GA ČR GA15-02532S; GA MŠk LH13012 Institutional support: RVO:67985840 Keywords: discrete-event systems * supervisory control * coordination control Subject RIV: BA - General Mathematics Impact factor: 2.208, year: 2016 http://www.tandfonline.com/doi/full/10.1080/00207179.2015.1079736

  15. A computed torque method based attitude control with optimal force distribution for articulated body mobile robots

    International Nuclear Information System (INIS)

    Fukushima, Edwardo F.; Hirose, Shigeo

    2000-01-01

    This paper introduces an attitude control scheme based on optimal force distribution using quadratic programming, which minimizes joint energy consumption. The method shares similarities with force distribution for multifingered hands, multiple coordinated manipulators, and legged walking robots. In particular, an attitude control scheme was introduced inside the force distribution problem and successfully implemented for control of the articulated body mobile robot KR-II. This is an actual mobile robot composed of cylindrical segments linked in series by prismatic joints, with a long snake-like appearance. The prismatic joints are force controlled so that each segment's vertical motion can automatically follow terrain irregularities. Attitude control is necessary because the system acts like a series of wheeled inverted-pendulum carts connected together, and is thus unstable by nature. The validity and effectiveness of the proposed method are verified by computer simulation and by experiments with the robot KR-II. (author)
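
    The core numerical step, choosing joint forces by quadratic programming, can be pictured in miniature: minimize the quadratic cost f·f (a proxy for joint energy) subject to a force/moment balance Af = b. With equality constraints only, this QP has the closed-form least-norm solution f = A⁺b; the paper's full method would add inequality constraints such as actuator limits. The matrix and vector below are invented.

    ```python
    # Least-norm force distribution: minimize ||f||^2 subject to A f = b.
    import numpy as np

    A = np.array([[1.0, 1.0, 1.0, 1.0],      # total vertical force balance
                  [-1.5, -0.5, 0.5, 1.5]])   # moment balance about the center
    b = np.array([40.0, 2.0])                # required net force and moment

    f = np.linalg.pinv(A) @ b                # pseudoinverse gives the
    print("joint forces:", np.round(f, 3))   # minimum-norm feasible solution
    print("balance check:", np.allclose(A @ f, b))
    ```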

  16. Distributed multiscale computing

    NARCIS (Netherlands)

    Borgdorff, J.

    2014-01-01

    Multiscale models combine knowledge, data, and hypotheses from different scales. Simulating a multiscale model often requires extensive computation. This thesis evaluates distributing these computations, an approach termed distributed multiscale computing (DMC). First, the process of multiscale…

  17. The Overview of the National Ignition Facility Distributed Computer Control System

    International Nuclear Information System (INIS)

    Lagin, L.J.; Bettenhausen, R.C.; Carey, R.A.; Estes, C.M.; Fisher, J.M.; Krammen, J.E.; Reed, R.K.; VanArsdall, P.J.; Woodruff, J.P.

    2001-01-01

    The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is a layered architecture of 300 front-end processors (FEP) coordinated by supervisor subsystems including automatic beam alignment and wavefront control, laser and target diagnostics, pulse power, and shot control timed to 30 ps. FEP computers incorporate either VxWorks on PowerPC or Solaris on UltraSPARC processors that interface to over 45,000 control points attached to VME-bus or PCI-bus crates respectively. Typical devices are stepping motors, transient digitizers, calorimeters, and photodiodes. The front-end layer is divided into another segment comprised of an additional 14,000 control points for industrial controls including vacuum, argon, synthetic air, and safety interlocks implemented with Allen-Bradley programmable logic controllers (PLCs). The computer network is augmented by an asynchronous transfer mode (ATM) network that delivers video streams from 500 sensor cameras monitoring the 192 laser beams to operator workstations. Software is based on an object-oriented framework using CORBA distribution that incorporates services for archiving, machine configuration, graphical user interface, monitoring, event logging, scripting, alert management, and access control. Software coding, using a mixed-language environment of Ada95 and Java, is one-third complete at over 300 thousand source lines. Control system installation is currently under way for the first 8 beams, with project completion scheduled for 2008.

  18. Coping with distributed computing

    International Nuclear Information System (INIS)

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  19. Integrated Computing, Communication, and Distributed Control of Deregulated Electric Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Bajura, Richard; Feliachi, Ali

    2008-09-24

    Restructuring of the electricity market has affected all aspects of the power industry, from generation to transmission, distribution, and consumption. Transmission circuits in particular are often stressed beyond their stability limits, because environmental concerns and financial risk make it difficult to build new transmission lines. Deregulation has resulted in the need for tighter control strategies to maintain reliability even in the event of considerable structural changes, such as the loss of a large generating unit or a transmission line, and changes in loading conditions due to continuously varying power consumption. Our research efforts under the DOE EPSCoR Grant focused on Integrated Computing, Communication and Distributed Control of Deregulated Electric Power Systems. This research is applicable to operating and controlling modern electric energy systems. The controls developed by APERC provide for a more efficient, economical, reliable, and secure operation of these systems. Under this program, we developed distributed control algorithms suitable for large-scale geographically dispersed power systems, together with economic tools to evaluate their effectiveness and impact on power markets. Progress was made in the development of distributed intelligent control agents for reliable and automated operation of integrated electric power systems; the methodologies employed combine information technology, control and communication, agent technology, and power systems engineering. In the event of scheduled load changes or unforeseen disturbances, the power system is expected to minimize the effects and costs of disturbances and to maintain critical infrastructure operational.

  20. A distributed, graphical user interface based, computer control system for atomic physics experiments.

    Science.gov (United States)

    Keshet, Aviv; Ketterle, Wolfgang

    2013-01-01

    Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
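
    The buffer-shortening claim is easy to picture: with a fixed-rate clock fast enough for 100 ns edges, every sample between events must be stored, whereas a variable-frequency clock needs one buffer entry per state change. The numbers below are made up for illustration.

    ```python
    # Compare output-buffer length for a fixed-rate clock versus a
    # variable-frequency clock that ticks only when a channel changes state.
    events = [(0.0, 1), (0.0000001, 0), (2.0, 1), (5.0, 0)]  # (time s, value)

    fixed_rate_hz = 10_000_000       # fixed clock fast enough for 100 ns edges
    duration = max(t for t, _ in events)
    fixed_buffer_len = int(duration * fixed_rate_hz)

    # With a variable-frequency clock, one buffer entry per state change:
    variable_buffer_len = len(events)

    print(f"fixed-rate buffer:    {fixed_buffer_len:,} samples")
    print(f"variable-rate buffer: {variable_buffer_len} samples")
    ```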

  1. A distributed, graphical user interface based, computer control system for atomic physics experiments

    Science.gov (United States)

    Keshet, Aviv; Ketterle, Wolfgang

    2013-01-01

    Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.

  2. Intelligent distributed computing

    CERN Document Server

    Thampi, Sabu

    2015-01-01

    This book contains a selection of refereed and revised papers of the Intelligent Distributed Computing Track originally presented at the third International Symposium on Intelligent Informatics (ISI-2014), September 24-27, 2014, Delhi, India.  The papers selected for this Track cover several Distributed Computing and related topics including Peer-to-Peer Networks, Cloud Computing, Mobile Clouds, Wireless Sensor Networks, and their applications.

  3. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed worldwide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.

  4. Final Report: MaRSPlus Sensor System Electrical Cable Management and Distributed Motor Control Computer Interface

    Science.gov (United States)

    Reil, Robin

    2011-01-01

    The success of JPL's Next Generation Imaging Spectrometer (NGIS) in Earth remote sensing has inspired a follow-on instrument project, the MaRSPlus Sensor System (MSS). One of JPL's responsibilities in the MSS project involves updating the documentation from the previous JPL airborne imagers to provide all the information necessary for an outside customer to operate the instrument independently. As part of this documentation update, I created detailed electrical cabling diagrams to provide JPL technicians with clear and concise build instructions and a database to track the status of cables from order to build to delivery. Simultaneously, a distributed motor control system is being developed for potential use on the proposed 2018 Mars rover mission. This system would significantly reduce the mass necessary for rover motor control, making more mass space available to other important spacecraft systems. The current stage of the project consists of a desktop computer talking to a single "cold box" unit containing the electronics to drive a motor. In order to test the electronics, I developed a graphical user interface (GUI) using MATLAB to allow a user to send simple commands to the cold box and display the responses received in a user-friendly format.

  5. ATLAS Distributed Computing

    CERN Document Server

    Schovancova, J; The ATLAS collaboration

    2011-01-01

    The poster details the different aspects of the ATLAS Distributed Computing experience after the first year of LHC data taking. We describe the performance of the ATLAS distributed computing system and the lessons learned during the 2010 run, pointing out parts of the system which were in a good shape, and also spotting areas which required improvements. Improvements ranged from hardware upgrade on the ATLAS Tier-0 computing pools to improve data distribution rates, tuning of FTS channels between CERN and Tier-1s, and studying data access patterns for Grid analysis to improve the global processing rate. We show recent software development driven by operational needs with emphasis on data management and job execution in the ATLAS production system.

  6. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    Science.gov (United States)

    Sanders, Adam M. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor); Strawser, Philip A. (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.
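
    A skeletal Python rendering of the three-level framework described in this record follows: embedded joint controllers, a coordinating second-level controller, and a task-level commander. Class names and the trivial 'servo' are inventions for illustration, not the patented implementation.

    ```python
    class JointController:                       # level 1: embedded in a joint
        def __init__(self, name):
            self.name, self.position = name, 0.0
        def command(self, setpoint):
            self.position = setpoint             # stand-in for a servo loop
        def feedback(self):
            return self.position

    class Coordinator:                           # level 2: coordinates joints
        def __init__(self, joints):
            self.joints = {j.name: j for j in joints}
        def execute(self, pose):
            for name, setpoint in pose.items():
                self.joints[name].command(setpoint)
            return {n: j.feedback() for n, j in self.joints.items()}

    class TaskController:                        # level 3: commands whole tasks
        def __init__(self, coordinator):
            self.coordinator = coordinator
        def perform(self, task):
            return [self.coordinator.execute(pose) for pose in task]

    arm = Coordinator([JointController(f"j{i}") for i in range(3)])
    print(TaskController(arm).perform([{"j0": 0.5, "j1": -0.2}, {"j2": 1.0}]))
    ```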

  7. File management for experiment control parameters within a distributed function computer network

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1976-10-01

    An attempt to design and implement a computer system for control of and data collection from a set of laboratory experiments reveals that many of the experiments in the set require an extensive collection of parameters for their control. The operation of the experiments can be greatly simplified if a means can be found for storing these parameters between experiments and automatically accessing them as they are required. A subsystem for managing files of such experiment control parameters is discussed. 3 figures
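
    A minimal sketch of such a parameter-file subsystem: control parameters are saved per experiment and reloaded automatically on the next run. The JSON storage format and all names are assumptions, not the 1976 implementation.

    ```python
    import json
    from pathlib import Path

    class ParameterStore:
        """Stores each experiment's control parameters between runs."""
        def __init__(self, directory="params"):
            self.dir = Path(directory)
            self.dir.mkdir(exist_ok=True)

        def save(self, experiment: str, params: dict) -> None:
            (self.dir / f"{experiment}.json").write_text(json.dumps(params, indent=2))

        def load(self, experiment: str) -> dict:
            path = self.dir / f"{experiment}.json"
            return json.loads(path.read_text()) if path.exists() else {}

    store = ParameterStore()
    store.save("spectrometer_3", {"wavelength_A": 2.36, "monitor_counts": 10000})
    print(store.load("spectrometer_3"))
    ```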

  8. DIRAC distributed computing services

    International Nuclear Information System (INIS)

    Tsaregorodtsev, A

    2014-01-01

    The DIRAC Project provides a general-purpose framework for building distributed computing systems. It is now used in several HEP and astrophysics experiments as well as by user communities in other scientific domains. There is large interest from smaller user communities in having a simple tool like DIRAC for accessing grid and other types of distributed computing resources. However, small experiments cannot afford to install and maintain dedicated services. Therefore, several grid infrastructure projects are providing DIRAC services for their respective user communities. These services are used for user tutorials as well as to help port applications to the grid for practical day-to-day work. The services typically give access to several grid infrastructures as well as to standalone computing clusters accessible by the target user communities. In this paper we present the experience of running DIRAC services provided by the France-Grilles NGI and other national grid infrastructure projects.

  9. A distributed computer system for digitising machines

    International Nuclear Information System (INIS)

    Bairstow, R.; Barlow, J.; Waters, M.; Watson, J.

    1977-07-01

    This paper describes a Distributed Computing System, based on micro computers, for the monitoring and control of digitising tables used by the Rutherford Laboratory Bubble Chamber Research Group in the measurement of bubble chamber photographs. (author)

  10. Distributed computing at the SSCL

    International Nuclear Information System (INIS)

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  11. Distributed computing at the SSCL

    International Nuclear Information System (INIS)

    Cormell, L.R.; White, R.C.

    1994-01-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory (SSCL). In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  12. System of operative computer control of power distribution fields in the Beloyarsk nuclear power plant

    International Nuclear Information System (INIS)

    Kulikov, N.Ya.; Snitko, Eh.I.; Rasputnis, A.M.; Solodov, V.P.

    1976-01-01

    The system of intrareactor (in-core) control for the reactors of the Beloyarsk nuclear power station is described. In the second unit of the station, use is made of direct-charge emission detectors installed in the central apertures of the superheater channels and operating reliably at temperatures up to 750 deg C. The detectors of the first and second units are connected to a computer, which sends the processed signal results to a printer, while deviation signals go to the mimic panels of the reactors. The operability of the detectors is checked by comparison with zero as well as with the mean detector current for the reactor concerned. The intrareactor control system has allowed the stable thermal power to be increased from 480-500 to 530 MW and makes it possible to monitor and maintain the established neutron field with a relative error of 3-4%. The structural scheme of the intrareactor control system is given.
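
    The operability check described, comparing each detector current with zero and with the reactor mean, can be expressed in a few lines. The thresholds below are arbitrary placeholders, not plant criteria.

    ```python
    def check_detectors(currents_uA, rel_band=0.25):
        """Flag dead detectors and detectors far from the reactor mean."""
        mean = sum(currents_uA) / len(currents_uA)
        report = {}
        for i, c in enumerate(currents_uA):
            if c <= 0.0:
                report[i] = "failed (no signal)"
            elif abs(c - mean) > rel_band * mean:
                report[i] = f"suspect ({100*abs(c-mean)/mean:.1f}% from mean)"
            else:
                report[i] = "ok"
        return report

    print(check_detectors([4.1, 4.3, 0.0, 4.0, 6.2]))
    ```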

  13. Distribution of computer functionality for accelerator control at the Brookhaven AGS

    International Nuclear Information System (INIS)

    Stevens, A.; Clifford, T.; Frankel, R.

    1985-01-01

    A set of physical and functional system components and their interconnection protocols has been established for all controls work at the AGS. Portions of these designs were tested as part of enhanced operation of the AGS as a source of polarized protons, and additional segments will be implemented during the continuing construction efforts which are adding heavy-ion capability to our facility. Our efforts include the following computer and control system elements: a broadband local area network, which embodies modems, transmission systems and branch interface units; a hierarchical layer, which performs certain database and watchdog/alarm functions; a group of workstation processors (Apollos) which perform the function of traditional minicomputer hosts; and a layer which provides both real-time control and standardization functions for accelerator devices and instrumentation. Database and other accelerator functionality is assigned to the most appropriate level within our network for real-time performance, long-term utility, and orderly growth.

  14. Coordination control of distributed systems

    CERN Document Server

    Villa, Tiziano

    2015-01-01

    This book describes how control of distributed systems can be advanced by an integration of control, communication, and computation. The global control objectives are met by judicious combinations of local and nonlocal observations taking advantage of various forms of communication exchanges between distributed controllers. Control architectures are considered according to increasing degrees of cooperation of local controllers: fully distributed or decentralized control, control with communication between controllers, coordination control, and multilevel control. The book also covers topics bridging computer science, communication, and control, such as communication for control of networks, average consensus for distributed systems, and modeling and verification of discrete and hybrid systems. Examples and case studies are introduced in the first part of the text and developed throughout the book. They include: control of underwater vehicles, automated guided vehicles on a container terminal, contro...

  15. Distributed computing for global health

    CERN Multimedia

    CERN. Geneva; Schwede, Torsten; Moore, Celia; Smith, Thomas E; Williams, Brian; Grey, François

    2005-01-01

    Distributed computing harnesses the power of thousands of computers within organisations or over the Internet. In order to tackle global health problems, several groups of researchers have begun to use this approach to exceed by far the computing power of a single lab. This event illustrates how companies, research institutes and the general public are contributing their computing power to these efforts, and what impact this may have on a range of world health issues. Grids for neglected diseases (Vincent Breton, CNRS/EGEE): This talk introduces the topic of distributed computing, explaining the similarities and differences between Grid computing, volunteer computing and supercomputing, and outlines the potential of Grid computing for tackling neglected diseases where there is little economic incentive for private R&D efforts. Recent results on malaria drug design using the Grid infrastructure of the EU-funded EGEE project, which is coordinated by CERN and involves 70 partners in Europe, the US and Russi...

  16. Distributed Sensing, Computing, and Actuation Architecture for Aeroservoelastic Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal introduces an approach to aeroservoelastic control that provides enhanced robustness to unmodeled dynamics. The core of the approach is a processing...

  17. Distributed GPU Computing in GIScience

    Science.gov (United States)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation and environmental sensing (Li et al., 2013), and CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative, offers outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework: 1) on each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed environment; 3) GPUs, as graphics-targeted devices, are used to greatly improve rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Keywords: Geovisualization, GIScience, Spatiotemporal Studies. Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. IEEE Transactions on Visualization and Computer Graphics.

  18. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005. From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems. These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems. This book brings together a group of outsta…

  19. Distributed computing for macromolecular crystallography.

    Science.gov (United States)

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles

    2018-02-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community.

  20. Distributed-memory matrix computations

    DEFF Research Database (Denmark)

    Balle, Susanne Mølleskov

    1995-01-01

    The main goal of this project is to investigate, develop, and implement algorithms for numerical linear algebra on parallel computers in order to acquire expertise in methods for parallel computations. An important motivation for analyzing and investigating the potential for parallelism in these algorithms is that many scientific applications rely heavily on the performance of the involved dense linear algebra building blocks. Even though we consider the distributed-memory as well as the shared-memory programming paradigm, the major part of the thesis is dedicated to distributed-memory architectures. We emphasize distributed-memory massively parallel computers - such as the Connection Machines model CM-200 and model CM-5/CM-5E - available to us at UNI-C and at Thinking Machines Corporation. The CM-200 was, at the time this project started, one of the few existing massively parallel computers…

  1. Distributed Control Diffusion

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2007-01-01

    Programming a modular, self-reconfigurable robot is, however, a complicated task: the robot is essentially a real-time, distributed embedded system, where control and communication paths often are tightly coupled to the current physical configuration of the robot. To facilitate the task of programming modular, self-reconfigurable robots, we present the concept of distributed control diffusion: distributed queries are used to identify modules that play a specific role in the robot, and behaviors that implement specific control strategies are diffused throughout the robot based on these role assignments. This approach allows the programmer to dynamically distribute behaviors throughout a robot and moreover provides a partial abstraction over the concrete physical shape of the robot. We have implemented a prototype of a distributed control diffusion system for the ATRON modular, self-reconfigurable robot.
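
    A toy Python rendering of control diffusion: a query selects modules by their current role, and a behavior is installed on exactly those modules, so the controller never references the robot's concrete shape. Roles and behaviors are invented for the example, not taken from the ATRON prototype.

    ```python
    modules = [
        {"id": 0, "role": "wheel"},
        {"id": 1, "role": "spine"},
        {"id": 2, "role": "wheel"},
    ]

    behaviors = {}   # module id -> active behavior

    def diffuse(role, behavior):
        """Install `behavior` on every module whose role matches the query."""
        for m in modules:
            if m["role"] == role:
                behaviors[m["id"]] = behavior

    diffuse("wheel", "rotate_forward")
    diffuse("spine", "hold_rigid")
    print(behaviors)  # {0: 'rotate_forward', 2: 'rotate_forward', 1: 'hold_rigid'}

    # After self-reconfiguration the same queries re-select modules by role,
    # so this code is independent of the robot's physical configuration.
    ```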

  2. Computer control system of TRISTAN

    International Nuclear Information System (INIS)

    Kurokawa, Shin-ichi; Shinomoto, Manabu; Kurihara, Michio; Sakai, Hiroshi.

    1984-01-01

    For the operation of a large accelerator, it is necessary to connect an enormous number of electromagnets, power supplies, vacuum equipment, high-frequency accelerating devices and so on, and to control them harmoniously. For this purpose a number of computers are adopted and connected with a network; in this way, a large laboratory-automation computer system which integrates and controls the whole facility is constructed. As a large-scale distributed system, functions such as electromagnet control, file processing and operation control are assigned to respective computers, and total control is made feasible by the network connection. At the same time, as the interface with controlled equipment, CAMAC (computer-aided measurement and control) is adopted to ensure flexibility and the possibility of expanding the system. Moreover, the language 'NODAL', having a network support function, was developed so that software can easily be written without considering the composition of the complex distributed system. The accelerator in the TRISTAN project is composed of an electron linear accelerator, an accumulation ring of 6 GeV and a main ring of 30 GeV. The two ring accelerators must be synchronously operated as one body, and are controlled with one computer system. The hardware and software are outlined. (Kako, I.)

  3. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  4. Design concepts and experience in the application of distributed computing to the control of large CEGB power plant

    International Nuclear Information System (INIS)

    Wallace, J.N.

    1980-01-01

    With the ever-increasing price of fossil fuels it became obvious during the 1970s that Pembroke Power Station (4 x 500MW, oil fired) and Didcot Power Station (4 x 500MW, coal fired) were going to operate flexibly, with many units two-shifting frequently. The region was also expecting to refurbish nuclear plant in the 1980s. Based on previous experience with mini-computers, the region initiated a research and development programme aimed at refitting Pembroke and Didcot using distributed computer techniques that were also broadly applicable to nuclear plant. Major schemes have now been implemented at Pembroke and Didcot for plant condition monitoring, control and display. All computers on two units at each station are now functional, with a third unit currently being set to work. This paper aims to outline the generic technical aspects of these schemes, describe the implementation strategy adopted and develop some thoughts on nuclear power plant applications. (auth)

  5. Distributed Decision Making and Control

    CERN Document Server

    Rantzer, Anders

    2012-01-01

    Distributed Decision Making and Control is a mathematical treatment of relevant problems in distributed control, decision and multiagent systems. The research reported was prompted by the recent rapid development in large-scale networked and embedded systems and communications. One of the main reasons for the growing complexity in such systems is the dynamics introduced by computation and communication delays. Reliability, predictability, and efficient utilization of processing power and network resources are central issues, and the new theory and design methods presented here are needed to analyze and optimize the complex interactions that arise between controllers, plants and networks. The text also helps to meet requirements arising from industrial practice for a more systematic approach to the design of distributed control structures and corresponding information interfaces. Theory for coordination of many different control units is closely related to economics and game theory, network uses being dictated by...

  6. Overlapping clusters for distributed computation.

    Energy Technology Data Exchange (ETDEWEB)

    Mirrokni, Vahab (Google Research, New York, NY); Andersen, Reid (Microsoft Corporation, Redmond, WA); Gleich, David F.

    2010-11-01

    Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex-partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication-avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al., Nature 2009; Mishra et al., WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). As the ratio increases, the swapping probability and the PageRank communication volume decrease.
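
    The volume ratio is easy to compute for a toy graph: total edge storage across the overlapping clusters divided by the number of edges, so 2.0 means the graph is stored twice. The graph and clusters below are made up for illustration.

    ```python
    graph_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
    clusters = [{0, 1, 2, 3}, {2, 3, 4, 0}]      # vertex sets that intersect

    def volume(cluster):
        """Edges with at least one endpoint in the cluster (stored locally)."""
        return sum(1 for u, v in graph_edges if u in cluster or v in cluster)

    total_stored = sum(volume(c) for c in clusters)
    ratio = total_stored / len(graph_edges)
    print(f"volume ratio: {ratio:.2f}")   # >1.0 reflects the overlap overhead
    ```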

  7. Towards distributed multiscale computing for the VPH

    NARCIS (Netherlands)

    Hoekstra, A.G.; Coveney, P.

    2010-01-01

    Multiscale modeling is fundamental to the Virtual Physiological Human (VPH) initiative. Most detailed three-dimensional multiscale models lead to prohibitive computational demands. As a possible solution we present MAPPER, a computational science infrastructure for Distributed Multiscale Computing.

  8. Leakage Reduction in Water Distribution Systems with Efficient Placement and Control of Pressure Reducing Valves Using Soft Computing Techniques

    Directory of Open Access Journals (Sweden)

    A. Gupta

    2017-04-01

    Reduction of leakage in a water distribution system (WDS) is one of the major concerns of water industries. Leakage depends on pressure, hence installing pressure reducing valves (PRVs) in the water network is a successful technique for reducing leakage. Determining the number of valves, their locations, and their optimal control settings are the challenges faced. This paper presents a new algorithm-based rule for determining the location of valves in a WDS having a variable demand pattern, which results in more favorable optimization of PRV localization than previous techniques. A multiobjective genetic algorithm (NSGA-II) was used to determine the optimized control value of the PRVs and to minimize the leakage rate in the WDS. Minimum required pressure was maintained at all nodes to avoid pressure deficiency at any node. The proposed methodology is applied to a benchmark WDS; after using PRVs, the average leakage rate was reduced by 6.05 l/s (20.64%), which is more favorable than the rate obtained with the existing techniques used for leakage control in the WDS. Compared with earlier studies, a lower number of PRVs was required for optimization, thus the proposed algorithm tends to provide a more cost-effective solution. In conclusion, the proposed algorithm leads to more favorable optimized localization and control of PRVs with an improved leakage reduction rate.
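
    The pressure-leakage trade-off behind PRV setting can be sketched with a pressure-driven leakage model q = C·p^N1 and a minimum service-pressure constraint. The paper itself optimizes a full network with NSGA-II, so the one-node enumeration below, with an assumed coefficient and exponent, is only an illustration of the underlying trade-off.

    ```python
    C, N1 = 0.05, 1.18          # assumed leakage coefficient and exponent
    P_MIN = 20.0                # assumed minimum required nodal pressure (m)

    def leakage(pressure_m):
        """Pressure-driven leakage at one representative node (l/s)."""
        return C * pressure_m ** N1

    candidates = range(20, 61, 5)            # candidate PRV settings (m head)
    feasible = [(p, leakage(p)) for p in candidates if p >= P_MIN]
    best_p, best_q = min(feasible, key=lambda pq: pq[1])
    print(f"best setting: {best_p} m head, est. leakage {best_q:.2f} l/s")
    ```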

  9. Computer-controlled attenuator.

    Science.gov (United States)

    Mitov, D; Grozev, Z

    1991-01-01

    Various possibilities for applying electronic computer-controlled attenuators to the automation of physiological experiments are considered. A detailed description is given of the design of a 4-channel computer-controlled attenuator, in two of whose channels the output signal changes by a linear step, and in the other two by a logarithmic step. This, together with additional programmable timers, makes it possible to automate a wide range of studies in different spheres of physiology and psychophysics, including vision and hearing.
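
    The two channel types can be captured as code-to-attenuation mappings: a fixed linear increment per code step on two channels, and a fixed decibel increment on the other two. The step sizes below are assumptions, not the device's specifications.

    ```python
    def linear_gain(code, step=0.01):
        """Channels 1-2: output scales by a fixed linear increment per code."""
        return code * step                      # e.g. code 50 -> gain 0.50

    def log_gain(code, step_db=0.5):
        """Channels 3-4: each code step changes the level by a fixed dB amount."""
        return 10 ** (-code * step_db / 20)     # e.g. code 12 -> -6 dB

    print(linear_gain(50), round(log_gain(12), 3))
    ```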

  10. Computer Graphics Simulations of Sampling Distributions.

    Science.gov (United States)

    Gordon, Florence S.; Gordon, Sheldon P.

    1989-01-01

    Describes the use of computer graphics simulations to enhance student understanding of sampling distributions that arise in introductory statistics. Highlights include the distribution of sample proportions, the distribution of the difference of sample means, the distribution of the difference of sample proportions, and the distribution of sample…
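
    In the same spirit, a few lines of Python (standing in here for the article's graphics environment) build the sampling distribution of a sample proportion and compare it with theory.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p, n, trials = 0.3, 50, 10_000

    # Each trial draws one sample of size n and records its proportion.
    proportions = rng.binomial(n, p, size=trials) / n

    print(f"simulated mean {proportions.mean():.4f} vs theoretical {p}")
    print(f"simulated sd   {proportions.std():.4f} vs theoretical "
          f"{np.sqrt(p*(1-p)/n):.4f}")
    ```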

  11. Real time computer system with distributed microprocessors

    International Nuclear Information System (INIS)

    Heger, D.; Steusloff, H.; Syrbe, M.

    1979-01-01

    The usual centralized structure of computer systems, especially of process computer systems, cannot sufficiently exploit the progress of very-large-scale integrated semiconductor technology with respect to increasing reliability and performance and to decreasing expense, especially for the external periphery. This, and the increasing demands on process control systems, led the authors to re-examine the structure of such systems in general and to adapt it to the new environment. Computer systems with distributed, optical-fibre-coupled microprocessors allow very favourable problem solving, with decentrally controlled buslines and functional redundancy with automatic fault diagnosis and reconfiguration. A suitable programming system supports these hardware properties: PEARL for multicomputer systems, a dynamic loader, and processor and network operating systems. The necessary design principles are established mainly theoretically and by value analysis. An optimal overall system of this new generation of process control systems was implemented, supported by the results of two PDV projects (modular operating systems, an input/output colour-screen system as control panel), and tested by applying the system to the control of 28 pit furnaces of a steelworks. (orig.)

  12. Support system for ATLAS distributed computing operations

    CERN Document Server

    Kishimoto, Tomoe; The ATLAS collaboration

    2018-01-01

    The ATLAS distributed computing system has allowed the experiment to successfully meet the challenges of LHC Run 2. In order for distributed computing to operate smoothly and efficiently, several support teams are organized in the ATLAS experiment. The ADCoS (ATLAS Distributed Computing Operation Shifts) is a dedicated group of shifters who follow and report failing jobs, failing data transfers between sites, degradation of ATLAS central computing services, and more. The DAST (Distributed Analysis Support Team) provides user support to resolve issues related to running distributed analysis on the grid. The CRC (Computing Run Coordinator) maintains a global view of the day-to-day operations. In this presentation, the status and operational experience of the support system for ATLAS distributed computing in LHC Run 2 will be reported. This report also includes operations experience from the grid site point of view, and an analysis of the errors that create the biggest waste of wallclock time. The report of oper...

  13. Wireless infrared computer control

    Science.gov (United States)

    Chen, George C.; He, Xiaofei

    2004-04-01

    A wireless mouse is not restricted by cable length and so has an advantage over its wired counterpart. However, the mice available on the market have a detection range of less than 2 meters and angular coverage of less than 180 degrees. Furthermore, commercial infrared mice rely on a track ball and rollers to detect movement, which restricts their use on occasions where users want to move around freely, such as presentations and meetings. This paper presents our newly developed infrared wireless mouse, which has a detection range of 6 meters and angular coverage of 180 degrees. The new mouse uses buttons instead of the traditional track ball and is built as a hand-held device, like a remote controller. It enables users to control the cursor at a distance from the computer, freeing mouse operation from the desktop.

  14. Propulsion controlled aircraft computer

    Science.gov (United States)

    Cogan, Bruce R. (Inventor)

    2010-01-01

    A low-cost, easily retrofit Propulsion Controlled Aircraft (PCA) system for use on a wide range of commercial and military aircraft consists of a propulsion controlled aircraft computer that reads in aircraft data, including aircraft state, pilot commands and other related data; calculates the throttle position for a given maneuver commanded by the pilot; and then either displays both the current and the calculated throttle position on a cockpit display, showing the pilot where to move the throttles to achieve the commanded maneuver, or sends the commands digitally to the engines directly.

  15. Distributed Processing in Cloud Computing

    OpenAIRE

    Mavridis, Ilias; Karatza, Eleni

    2016-01-01

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016), Timisoara, Romania, February 8-11, 2016. Cloud computing offers a wide range of resources and services through the Internet that can be used for various purposes. The rapid growth of cloud computing has exempted many companies and institutions from the burden of maintaining expensive hardware and software infrastructure. With characteristics like high scalability, availability ...

  16. The distribution of cerebral muscarinic acetylcholine receptors in vivo in patients with dementia. A controlled study with 123IQNB and single photon emission computed tomography

    International Nuclear Information System (INIS)

    Weinberger, D.R.; Gibson, R.; Coppola, R.; Jones, D.W.; Molchan, S.; Sunderland, T.; Berman, K.F.; Reba, R.C.

    1991-01-01

    A high-affinity muscarinic receptor antagonist, 123IQNB (3-quinuclidinyl-4-iodobenzilate labeled with iodine 123), was used with single photon emission computed tomography to image muscarinic acetylcholine receptors in 14 patients with dementia and in 11 healthy controls. High-resolution single photon emission computed tomographic scanning was performed 21 hours after the intravenous administration of approximately 5 mCi of IQNB. In normal subjects, the images of retained ligand showed a consistent regional pattern that correlated with postmortem studies of the relative distribution of muscarinic receptors in the normal human brain, having high radioactivity counts in the basal ganglia, occipital cortex, and insular cortex, low counts in the thalamus, and virtually no counts in the cerebellum. Eight of 12 patients with a clinical diagnosis of Alzheimer's disease had obvious focal cortical defects in either frontal or posterior temporal cortex. Both patients with a clinical diagnosis of Pick's disease had obvious frontal and anterior temporal defects. A region of interest statistical analysis of relative regional activity revealed a significant reduction bilaterally in the posterior temporal cortex of the patients with Alzheimer's disease compared with controls. This study demonstrates the practicability of acetylcholine receptor imaging with 123IQNB and single photon emission computed tomography. The data suggest that focal abnormalities in muscarinic binding in vivo may characterize some patients with Alzheimer's disease and Pick's disease, but further studies are needed to address questions about partial volume artifacts and receptor quantification

  17. PEP computer control system

    International Nuclear Information System (INIS)

    1979-03-01

    This paper describes the design and performance of the computer system that will be used to control and monitor the PEP storage ring. Since the design is essentially complete and much of the system is operational, the system is described as it is expected to operate in 1979. Section 1 of the paper describes the system hardware, which includes the computer network, the CAMAC data I/O system, and the operator control consoles. Section 2 describes a collection of routines that provide general services to applications programs. These services include a graphics package, data base and data I/O programs, and a director program for use in operator communication. Section 3 describes a collection of automatic and semi-automatic control programs, known as SCORE, that contain mathematical models of the ring lattice and are used to determine in real time stable paths for changing beam configuration and energy and for orbit correction. Section 4 describes a collection of programs, known as CALI, that are used for calibration of ring elements.

  18. Distributed computing and nuclear reactor analysis

    International Nuclear Information System (INIS)

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-01-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations
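
    A minimal sketch of the kind of distributed parallel processing the abstract reports for long-running Monte Carlo calculations (the worker pool and the toy integrand are ours, not the ANL codes): batches are farmed out to workstation-class processes, each with an independent random stream, and the tallies are combined at the end.

        import random
        from multiprocessing import Pool

        def mc_batch(args):
            """One worker's batch; the hit test stands in for a real physics tally."""
            seed, n_samples = args
            rng = random.Random(seed)              # independent stream per worker
            return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                       for _ in range(n_samples))

        if __name__ == "__main__":
            n_workers, per_worker = 4, 250_000
            with Pool(n_workers) as pool:
                hits = pool.map(mc_batch, [(s, per_worker) for s in range(n_workers)])
            print("pi ~", 4.0 * sum(hits) / (n_workers * per_worker))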

  19. Distributed control system for the FMIT

    International Nuclear Information System (INIS)

    Johnson, J.A.; Machen, D.R.; Suyama, R.M.

    1979-01-01

    The control system for the Fusion Materials Irradiation Test (FMIT) Facility will provide the primary data acquisition, control, and interface components that integrate all of the individual FMIT systems into a functional facility. The control system consists of a distributed computer network, control consoles and instrumentation subsystems. The FMIT Facility will be started, operated and secured from a Central Control Room. All FMIT systems and experimental functions will be monitored from the Central Control Room. The data acquisition and control signals will be handled by a data communications network, which connects dual computers in the Central Control Room to the microcomputers in CAMAC crates near the various subsystems of the facility

  20. Computer aided control engineering

    DEFF Research Database (Denmark)

    Szymkat, Maciej; Ravn, Ole

    1997-01-01

    Current developments in the field of Computer Aided Control Engineering (CACE) have a visible impact on the design methodologies and the structure of the software tools supporting them. Today control engineers have at their disposal libraries, packages or programming environments that may ... in CACE enhancing efficient flow of information between the tools supporting the following phases of the design process. In principle, this flow has to be two-way, and more or less automated, in order to enable the engineer to observe the propagation of the particular design decisions taken at various ... levels. The major conclusions of the paper relate to identifying the factors affecting software tool integration in a way needed to facilitate design "inter-phase" communication. These are: standard application interfaces, dynamic data exchange mechanisms, code generation techniques and general...

  1. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
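
    The loop below is an illustrative sketch of the proposed scheme, not the authors' code: a Gaussian process is fitted to a small number of evaluations of an expensive (log-)probability density, and the extreme value of an acquisition function chooses the next sampling point. The stand-in density, the upper-confidence-bound acquisition and all parameter values are assumptions.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def log_density(x):                        # cheap stand-in for the real target
            return -0.5 * (x - 2.0) ** 2 + np.sin(3.0 * x)

        rng = np.random.default_rng(0)
        X = rng.uniform(-5, 5, size=(5, 1))        # few sampling points, as in the paper
        y = log_density(X).ravel()

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
        for _ in range(20):
            gp.fit(X, y)
            cand = rng.uniform(-5, 5, size=(512, 1))
            mu, sigma = gp.predict(cand, return_std=True)
            x_next = cand[np.argmax(mu + 2.0 * sigma)]   # extreme value of the acquisition
            X = np.vstack([X, x_next])
            y = np.append(y, log_density(x_next[0]))

        print("best x:", X[np.argmax(y)][0], "log density:", y.max())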

  2. Fel simulations using distributed computing

    NARCIS (Netherlands)

    Einstein, J.; Biedron, S.G.; Freund, H.P.; Milton, S.V.; Van Der Slot, P. J M; Bernabeu, G.

    2016-01-01

    While simulation tools are available and have been used regularly for simulating light sources, including Free-Electron Lasers, the increasing availability and lower cost of accelerated computing opens up new opportunities. This paper highlights a method of how accelerating and parallelizing code

  3. Organization of the secure distributed computing based on multi-agent system

    Science.gov (United States)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Developing methods for distributed computing currently receives much attention, and one such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can be exposed to security threats from the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing-network nodes, with ordinary networked PCs serving as the computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve large tasks by setting up a distributed computation. Agents on a computer network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the system according to the computing power of the machines on the network. The number of computers can be increased by connecting new machines to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization reduces problem-solving time and increases the fault tolerance (vitality) of the computing processes in a changing computing environment (dynamic changes in the number of computers on the network). The developed multi-agent system also detects falsification of the results of the distributed system, which could otherwise lead to wrong decisions, and checks and corrects erroneous results.
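
    A toy rendering of the load-distribution idea in this abstract (the class names and the proportional-share rule are ours, not the authors' algorithm): each agent reports the computing power of its node, and work units are shared out in proportion to that power.

        class NodeAgent:
            def __init__(self, name, power):
                self.name, self.power = name, power          # relative compute power

            def run(self, tasks):
                return {t: t * t for t in tasks}             # stand-in computation

        def distribute(tasks, agents):
            """Share tasks among agents in proportion to node power."""
            total = sum(a.power for a in agents)
            shares, start = {}, 0
            for a in agents:
                n = round(len(tasks) * a.power / total)
                shares[a.name] = tasks[start:start + n]
                start += n
            shares[agents[-1].name] += tasks[start:]         # leftover from rounding
            return shares

        agents = [NodeAgent("pc1", 1.0), NodeAgent("pc2", 3.0)]
        shares = distribute(list(range(8)), agents)
        print(shares)                                        # pc2 gets ~3x the work
        print({a.name: a.run(shares[a.name]) for a in agents})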

  4. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
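
    A conceptual sketch of the hybrid model (all names are ours; the actual information sharing protocol is not reproduced here): a hub pushes processed telemetry to subscribed consoles in client-server style, while any console may hand a derived value directly to a peer.

        from collections import defaultdict

        class TelemetryHub:                     # client-server side
            def __init__(self):
                self.subs = defaultdict(list)   # parameter -> subscriber callbacks

            def subscribe(self, parameter, callback):
                self.subs[parameter].append(callback)

            def publish(self, parameter, value):
                for cb in self.subs[parameter]:
                    cb(parameter, value)

        class Console:                          # a flight-controller position
            def __init__(self, name):
                self.name, self.peers = name, []

            def on_data(self, parameter, value):
                print(f"{self.name} <- {parameter} = {value}")
                for peer in self.peers:         # peer-to-peer hand-off of derived data
                    peer.on_data(f"{parameter}/derived-by-{self.name}", value)

        hub = TelemetryHub()
        guidance, capcom = Console("GUIDANCE"), Console("CAPCOM")
        guidance.peers.append(capcom)
        hub.subscribe("cabin_pressure", guidance.on_data)
        hub.publish("cabin_pressure", 14.7)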

  6. Computer control applied to accelerators

    CERN Document Server

    Crowley-Milling, Michael C

    1974-01-01

    The differences that exist between control systems for accelerators and other types of control systems are outlined. It is further indicated that earlier accelerators had manual control systems to which computers were added, but that it is essential for the new, large accelerators to include computers in the control systems right from the beginning. Details of the computer control designed for the Super Proton Synchrotron are presented. The method of choosing the computers is described, as well as the reasons for CERN having to design the message transfer system. The items discussed include: CAMAC interface systems, a new multiplex system, operator-to-computer interaction (such as touch screen, computer-controlled knob, and non-linear track-ball), and high-level control languages. Brief mention is made of the contributions of other high-energy research laboratories as well as of some other computer control applications at CERN. (0 refs).

  7. Impossibility results for distributed computing

    CERN Document Server

    Attiya, Hagit

    2014-01-01

    To understand the power of distributed systems, it is necessary to understand their inherent limitations: what problems cannot be solved in particular systems, or without sufficient resources (such as time or space). This book presents key techniques for proving such impossibility results and applies them to a variety of different problems in a variety of different system models. Insights gained from these results are highlighted, aspects of a problem that make it difficult are isolated, features of an architecture that make it inadequate for solving certain problems efficiently are identified

  8. LHCb: LHCb Distributed Computing Operations

    CERN Multimedia

    Stagni, F

    2011-01-01

    The proliferation of tools for monitoring both activities and infrastructure, together with the pressing need for prompt reaction to problems impacting data taking, data reconstruction, data reprocessing and user analysis, led to the need to better organize the huge amount of information available. The monitoring system for LHCb Grid Computing relies on many heterogeneous and independent sources of information offering different views for a better understanding of problems, while an operations team and defined procedures have been put in place to handle them. This work summarizes the state of the art of LHCb Grid operations, emphasizing the reasons behind various choices and the tools currently in use to run our daily activities. We highlight the most common problems experienced across years of activities on the WLCG infrastructure, the services with their criticality, the procedures in place, the relevant metrics, and the tools available and the ones still missing.

  9. Distributed computing by oblivious mobile robots

    CERN Document Server

    Flocchini, Paola; Santoro, Nicola

    2012-01-01

    The study of what can be computed by a team of autonomous mobile robots, originally started in robotics and AI, has become increasingly popular in theoretical computer science (especially in distributed computing), where it is now an integral part of the investigations on computability by mobile entities. The robots are identical computational entities located and able to move in a spatial universe; they operate without explicit communication and are usually unable to remember the past; they are extremely simple, with limited resources, and individually quite weak. However, collectively the ro

  10. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the amoeba system, argus, Andrew, and grapevine. One paper discusses the concepts and notations for concurrent programming, particularly language notation used in computer programming, synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  11. CMS distributed computing workflow experience

    Science.gov (United States)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D.; Prosper, Harrison B.; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao, Junhui; Pin, Arnaud; Schul, Nicolas; De Lentdecker, Gilles; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey; Barge, Derek; Lahiff, Andrew

    2011-12-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources, and the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, in the central production operation.

  12. CMS distributed computing workflow experience

    International Nuclear Information System (INIS)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D; Prosper, Harrison B; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao Junhui; Pin, Arnaud; Schul, Nicolas; Lentdecker, Gilles De; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey

    2011-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources, and the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, in the central production operation.

  13. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. The flexible computing utilization exploring the opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with the remote access, and the network topology and performance is deeply integrated into the core of the system. Moreover a new data management strategy, based on defined lifetime for each dataset, has been defin...

  14. Mobile Agents in Networking and Distributed Computing

    CERN Document Server

    Cao, Jiannong

    2012-01-01

    The book focuses on mobile agents, which are computer programs that can autonomously migrate between network sites. This text introduces the concepts and principles of mobile agents, provides an overview of mobile agent technology, and focuses on applications in networking and distributed computing.

  15. A Software Rejuvenation Framework for Distributed Computing

    Science.gov (United States)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.

  16. CMS Distributed Computing Workflow Experience

    CERN Document Server

    Haas, Jeffrey David

    2010-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which contain re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simul...

  17. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

    Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) was introduced in the late 70s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator
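
    For orientation, this is the sequential discrete-event core that PDES parallelizes (a generic textbook sketch, not the author's simulator): a timestamp-ordered pending-event list is drained in order, and each event may schedule further events. In PDES this queue is partitioned across logical processes that must keep their timestamps causally consistent.

        import heapq

        def simulate(initial_events, horizon):
            """Drain a timestamp-ordered event list; events may schedule new events."""
            queue = list(initial_events)        # (time, label, action) tuples
            heapq.heapify(queue)
            while queue:
                time, label, action = heapq.heappop(queue)
                if time > horizon:
                    break
                print(f"t={time:.2f}: {label}")
                for new_event in action(time):
                    heapq.heappush(queue, new_event)

        def job_arrival(t):
            # each arrival schedules its own completion and the next arrival
            return [(t + 0.8, "job done", lambda _t: []),
                    (t + 1.0, "job arrives", job_arrival)]

        simulate([(0.0, "job arrives", job_arrival)], horizon=3.0)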

  18. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  19. General distributed control system for fusion experiments

    International Nuclear Information System (INIS)

    Klingner, P.L.; Levings, S.J.; Wilkins, R.W.

    1986-01-01

    A general control system using distributed LSI-11 microprocessors is being developed. Common software resides in each LSI-11 and is tailored to an application by control specifications downloaded from a host computer. The microprocessors, their control interfaces, and the micro-to-host communications are CAMAC-based. The host computer also supports an operator interface, coordination of multiple microprocessors, and utilities to create and maintain the control specifications. Typical applications include monitoring safety interlocks as well as controlling vacuum systems, high-voltage charging systems, and diagnostics.

  20. ATLAS Distributed Computing: Experience and Evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centers around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics program including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2014 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  1. ATLAS distributed computing: experience and evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  2. Hierarchically structured distributed microprocessor network for control

    International Nuclear Information System (INIS)

    Greenwood, J.R.; Holloway, F.W.; Rupert, P.R.; Ozarski, R.G.; Suski, G.J.

    1979-01-01

    To satisfy a broad range of control-analysis and data-acquisition requirements for Shiva, a hierarchical, computer-based, modular, distributed control system was designed. This system handles the more than 3000 control elements and 1000 data acquisition units in a severe high-voltage, high-current environment. The design gives a flexible and reliable configuration to meet the development milestones for Shiva within critical time limits.

  3. Distributed Power Flow Controller

    NARCIS (Netherlands)

    Yuan, Z.

    2010-01-01

    In modern power systems, there is a great demand to control the power flow actively. Power flow controlling devices (PFCDs) are required for this purpose, because the power flow over the lines is the natural result of the impedance of each line. Due to the control capabilities of different types of

  4. Arcade: A Web-Java Based Framework for Distributed Computing

    Science.gov (United States)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  5. Distributed computing environment for Mine Warfare Command

    OpenAIRE

    Pritchard, Lane L.

    1993-01-01

    Approved for public release; distribution is unlimited. The Mine Warfare Command in Charleston, South Carolina has been converting its information systems architecture from a centralized mainframe-based system to a decentralized network of personal computers over the past several years. This thesis analyzes the progress of the evolution as of May 1992. The building blocks of a distributed architecture are discussed in relation to the choices the Mine Warfare Command has made to date. Ar...

  6. ATLAS distributed computing: experience and evolution

    International Nuclear Information System (INIS)

    Nairz, A

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, energies and event complexities. An essential requirement will be the efficient utilisation of current and future processor technologies as well as a broad range of computing platforms, including supercomputing and cloud resources. We will report on experience gained thus far and our progress in preparing ATLAS computing for the future.

  7. Intelligent Distributed Computing VI : Proceedings of the 6th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Badica, Costin; Malgeri, Michele; Unland, Rainer

    2013-01-01

    This book represents the combined peer-reviewed proceedings of the Sixth International Symposium on Intelligent Distributed Computing -- IDC 2012, of the International Workshop on Agents for Cloud -- A4C 2012 and of the Fourth International Workshop on Multi-Agent Systems Technology and Semantics -- MASTS 2012. All the events were held in Calabria, Italy during September 24-26, 2012. The 37 contributions published in this book address many topics related to theory and applications of intelligent distributed computing and multi-agent systems, including: adaptive and autonomous distributed systems, agent programming, ambient assisted living systems, business process modeling and verification, cloud computing, coalition formation, decision support systems, distributed optimization and constraint satisfaction, gesture recognition, intelligent energy management in WSNs, intelligent logistics, machine learning, mobile agents, parallel and distributed computational intelligence, parallel evolutionary computing, trus...

  8. Distributed Controllers for Norm Enforcement

    NARCIS (Netherlands)

    Testerink, B.J.G.; Dastani, M.M.; Bulling, Nils

    2016-01-01

    This paper focuses on computational mechanisms that control the behavior of autonomous systems at runtime without necessarily restricting their autonomy. We build on existing approaches from runtime verification, control automata, and norm-based systems, and define norm-based controllers that

  9. ATLAS Distributed Computing in LHC Run2

    International Nuclear Information System (INIS)

    Campana, Simone

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. A flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with the remote access, and the network topology and performance is deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defined to better manage the lifecycle of the data. In this note, an overview of the operational experience of the new system and its evolution is presented. (paper)

  10. Distributed expert systems for nuclear reactor control

    International Nuclear Information System (INIS)

    Otaduy, P.J.

    1992-01-01

    A network of distributed expert systems is the heart of a prototype supervisory control architecture developed at the Oak Ridge National Laboratory (ORNL) for an advanced multimodular reactor. Eight expert systems encode knowledge on signal acquisition, diagnostics, safeguards, and control strategies in a hybrid rule-based, multiprocessing and object-oriented distributed computing environment. An interactive simulation of a power block consisting of three reactors and one turbine provides a realistic testbed for performance analysis of the integrated control system in real time. Implementation details and representative reactor transients are discussed

  11. Research computing in a distributed cloud environment

    International Nuclear Information System (INIS)

    Fransham, K; Agarwal, A; Armstrong, P; Bishop, A; Charbonneau, A; Desmarais, R; Hill, N; Gable, I; Gaudet, S; Goliath, S; Impey, R; Leavett-Brown, C; Ouellete, J; Paterson, M; Pritchet, C; Penfold-Brown, D; Podaima, W; Schade, D; Sobie, R J

    2010-01-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.
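
    A highly simplified sketch of the Cloud Scheduler idea (the classes, fields and first-fit policy are invented for illustration, not taken from the system): watch the batch queue and boot user-customized virtual machines on whichever cloud still has capacity.

        class Cloud:
            def __init__(self, name, capacity):
                self.name, self.free = name, capacity        # free VM slots

            def boot_vm(self, image):
                self.free -= 1
                return f"{image}@{self.name}"

        def schedule(job_queue, clouds):
            """First-fit placement of queued jobs onto clouds with free slots."""
            placements = []
            for job in job_queue:                            # jobs name their VM image
                cloud = next((c for c in clouds if c.free > 0), None)
                if cloud is None:
                    break                                    # every cloud full; jobs wait
                placements.append((job["id"], cloud.boot_vm(job["vm_image"])))
            return placements

        clouds = [Cloud("private-site", 2), Cloud("commercial", 1)]
        jobs = [{"id": i, "vm_image": "analysis-vm"} for i in range(4)]
        print(schedule(jobs, clouds))                        # the last job is left waiting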

  12. Design of a distributed control system

    Energy Technology Data Exchange (ETDEWEB)

    Bilous, O. [Commissariat a l'Energie Atomique, Saclay (France), Centre d'Etudes Nucleaires]

    1959-07-01

    A digital computer is used to evaluate various pressure control systems for a gaseous diffusion cascade. This is an example of a distributed feedback control system. The paper gives a brief discussion of similar cases of distributed or stage-wise control systems, which may occur in multiple temperature control of chemical processes. (author)

  13. Present SLAC accelerator computer control system features

    International Nuclear Information System (INIS)

    Davidson, V.; Johnson, R.

    1981-02-01

    The current functional organization and state of software development of the computer control system of the Stanford Linear Accelerator is described. Included is a discussion of the distribution of functions throughout the system, the local controller features, and currently implemented features of the touch panel portion of the system. The functional use of our triplex of PDP11-34 computers sharing common memory is described. Also included is a description of the use of pseudopanel tables as data tables for closed loop control functions

  14. Cloud Computing as Evolution of Distributed Computing – A Case Study for SlapOS Distributed Cloud Computing Platform

    Directory of Open Access Journals (Sweden)

    George SUCIU

    2013-01-01

    The cloud computing paradigm has been defined from several points of view, the main two directions being either as an evolution of the grid and distributed computing paradigm, or, on the contrary, as a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model which can be applied for deploying various applications.

  15. Operation of the ATLAS distributed computing

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2018-01-01

    We describe the central operation of the ATLAS distributed computing system. The majority of compute intensive activities within ATLAS are carried out on some 350,000 CPU cores on the Grid, augmented by opportunistic usage of significant HPC and volunteer resources. The increasing scale, and challenging new payloads, demand fine-tuning of operational procedures together with timely developments of the production system. We describe several such developments, motivated directly from operational experience. Optimization of inefficient task requests, from both official production and users, is made possible by automatic detection of payload properties. User education, job shaping or preventative throttling help to increase the overall throughput of the available resources.

  16. Overview of the ATLAS distributed computing system

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment successfully uses a worldwide computing infrastructure to support the physics program during LHC Run 2. The grid workflow system PanDA routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total more than 300 PB of data is distributed over more than 150 sites in the WLCG and handled by the ATLAS data management system Rucio. To prepare for the ever growing LHC luminosity in future runs new developments are underway to even more efficiently use opportunistic resources such as HPCs and utilize new technologies. This presentation will review and explain the outline and the performance of the ATLAS distributed computing system and give an outlook to new workflow and data management ideas for the beginning of the LHC Run 3.

  17. Decentralized Resource Management in Distributed Computer Systems.

    Science.gov (United States)

    1982-02-01

    ... directly exchanging user state information. Eventcounts and sequencers correspond to semaphores in the sense that synchronization primitives are used to ... and techniques are required to achieve synchronization in distributed computers without reliance on any centralized entity such as a semaphore. ... known solutions to the access synchronization problem was Dijkstra's semaphore [12]. The importance of the semaphore is that it correctly addresses the ...
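
    For readers unfamiliar with the primitives named in these fragments, here is a generic sketch of eventcounts and sequencers in the style of Reed and Kanodia (textbook material, not code from the report): a sequencer hands out totally ordered tickets, and an eventcount lets a process wait for the count to reach its ticket.

        import threading

        class EventCount:
            """Monotone counter: advance() increments, await_(v) blocks until >= v."""
            def __init__(self):
                self._count = 0
                self._cond = threading.Condition()

            def advance(self):
                with self._cond:
                    self._count += 1
                    self._cond.notify_all()

            def await_(self, value):
                with self._cond:
                    self._cond.wait_for(lambda: self._count >= value)

        class Sequencer:
            """Hands out totally ordered tickets to competing processes."""
            def __init__(self):
                self._next = 0
                self._lock = threading.Lock()

            def ticket(self):
                with self._lock:
                    t, self._next = self._next, self._next + 1
                    return t

    A process that draws ticket t, calls await_(t), performs its update, and then calls advance() obtains mutually exclusive, ordered access without any central semaphore.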

  18. ATLAS Distributed Computing: Its Central Services core

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration

    2018-01-01

    The ATLAS Distributed Computing (ADC) Project is responsible for the off-line processing of data produced by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. It facilitates data and workload management for ATLAS computing on the Worldwide LHC Computing Grid (WLCG). ADC Central Services operations (CSops) is a vital part of ADC, responsible for the deployment and configuration of services needed by ATLAS computing and operation of those services on CERN IT infrastructure, providing knowledge of CERN IT services to ATLAS service managers and developers, and supporting them in case of issues. Currently this entails the management of thirty-seven different OpenStack projects, with more than five thousand cores allocated for these virtual machines, as well as overseeing the distribution of twenty-nine petabytes of storage space in EOS for ATLAS. As the LHC begins to get ready for the next long shut-down, which will bring in many new upgrades to allow for more data to be captured by the on-line syste...

  19. Distributed systems status and control

    Science.gov (United States)

    Kreidler, David; Vickers, David

    1990-01-01

    Concepts are investigated for an automated status and control system for a distributed processing environment. System characteristics, data requirements for health assessment, data acquisition methods, system diagnosis methods and control methods were investigated in an attempt to determine the high-level requirements for a system which can be used to assess the health of a distributed processing system and implement control procedures to maintain an accepted level of health for the system. A potential concept for automated status and control includes the use of expert system techniques to assess the health of the system, detect and diagnose faults, and initiate or recommend actions to correct the faults. Therefore, this research included the investigation of methods by which expert systems were developed for real-time environments and distributed systems. The focus is on the features required by real-time expert systems and the tools available to develop real-time expert systems.

  20. Data analytics in the ATLAS Distributed Computing

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2015-01-01

    The ATLAS Data analytics effort is focused on creating systems which provide the ATLAS ADC with new capabilities for understanding distributed systems and overall operational performance. These capabilities include: warehousing information from multiple systems (the production and distributed analysis system - PanDA, the distributed data management system - Rucio, the file transfer system, various monitoring services, etc.); providing a platform to execute arbitrary data mining and machine learning algorithms over aggregated data; satisfying a variety of use cases for different user roles; and hosting new third-party analytics services on a scalable compute platform. We describe the implemented system, where: data sources are existing RDBMS (Oracle) and Flume collectors; a Hadoop cluster is used to store the data; native Hadoop and Apache Pig scripts are used for data aggregation; and R is used for in-depth analytics. Part of the data is indexed in ElasticSearch so both simpler investigations and complex dashboards can be made ...
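
    The flavor of the per-site aggregations that the abstract delegates to native Hadoop and Apache Pig scripts can be shown in plain Python over a toy extract (the records and field names are invented):

        from collections import defaultdict

        jobs = [                                  # toy extract of job records
            {"site": "SITE-A", "status": "finished"},
            {"site": "SITE-A", "status": "failed"},
            {"site": "SITE-B", "status": "finished"},
        ]

        totals, failures = defaultdict(int), defaultdict(int)
        for job in jobs:                          # the equivalent of a GROUP BY site
            totals[job["site"]] += 1
            failures[job["site"]] += job["status"] == "failed"

        for site in sorted(totals):
            print(site, f"failure rate {failures[site] / totals[site]:.0%}")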

  1. Integrated Transmission and Distribution Control

    Energy Technology Data Exchange (ETDEWEB)

    Kalsi, Karanjit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fuller, Jason C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lian, Jianming [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhang, Wei [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Marinovici, Laurentiu D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fisher, Andrew R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chassin, Forrest S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hauer, Matthew L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-01-01

    Distributed generation, demand response, distributed storage, smart appliances, electric vehicles and renewable energy resources are expected to play a key part in the transformation of the American power system. Control, coordination and compensation of these smart grid assets are inherently interlinked. Advanced control strategies to support large-scale penetration of distributed smart grid assets do not currently exist. While many of the smart grid technologies proposed involve assets being deployed at the distribution level, most of the significant benefits accrue at the transmission level. The development of advanced smart grid simulation tools, such as GridLAB-D, has led to a dramatic improvement in the models of smart grid assets available for design and evaluation of smart grid technology. However, one of the main challenges to quantifying the benefits of smart grid assets at the transmission level is the lack of tools and a framework for integrating transmission and distribution technologies into a single simulation environment. Furthermore, given the size and complexity of the distribution system, it is crucial to be able to represent the behavior of distributed smart grid assets using reduced-order controllable models and to analyze their impacts on the bulk power system in terms of stability and reliability.

  2. Proceedings: Distributed digital systems, plant process computers, and networks

    International Nuclear Information System (INIS)

    1995-03-01

    These are the proceedings of a workshop on Distributed Digital Systems, Plant Process Computers, and Networks held in Charlotte, North Carolina on August 16--18, 1994. The purpose of the workshop was to provide a forum for technology transfer, technical information exchange, and education. The workshop was attended by more than 100 representatives of electric utilities, equipment manufacturers, engineering service organizations, and government agencies. The workshop consisted of three days of presentations, exhibitions, a panel discussion and attendee interactions. The original plant process computers at nuclear power plants are becoming obsolete, making it increasingly difficult for them to support plant operations and maintenance effectively. Some utilities have already replaced their plant process computers with more powerful modern computers, while many other utilities intend to replace their aging plant process computers in the future. Information on recent and planned implementations is presented. Choosing an appropriate communications and computing network architecture facilitates integrating new systems and provides functional modularity for both hardware and software. Control room improvements such as CRT-based distributed monitoring and control, as well as digital decision and diagnostic aids, can improve plant operations. Commercially available digital products connected to the plant communications system are now readily available to provide distributed processing where needed. Plant operations, maintenance activities, and engineering analyses can be supported in a cost-effective manner. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database

  3. CMS Distributed Computing Integration in the LHC sustained operations era

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Bockelman, B; Fisk, I

    2011-01-01

    After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless it is this same need for stability and smooth operations that requires the introduction of features that were considered not strategic in the previous phases. Examples are: adequate authorization to control and prioritize the access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks on the infrastructure; increased automation to reduce the manpower needed for operations; and an effective process to deploy new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity, which is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months as well as the requirements to Grid and Cloud software developers for the future.

  4. Development of distributed computer systems for future nuclear power plants

    International Nuclear Information System (INIS)

    Yan, G.; L'Archeveque, J.V.R.

    1978-01-01

    Dual computers have been used for direct digital control in CANDU power reactors since 1963. However, as reactor plants have grown in size and complexity, some drawbacks to centralized control have appeared, such as the surprisingly large amount of cabling required for information transmission. Dramatic changes in component costs and a desire to improve system performance have stimulated a broad-based research and development effort in distributed systems. This paper outlines work in this area

  5. Distributed computing system with dual independent communications paths between computers and employing split tokens

    Science.gov (United States)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks; each second input/output interface includes a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, giving each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, giving each computer the ability to establish a communications link with another of the computers while bypassing the remainder. Each computer is controlled by a resident copy of a common operating system. Communication between computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is held in the memory of at least one of the computers; the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of those functions. The first input/output interfaces each include logic for detecting collisions between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
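
    The split-token mechanism is easiest to see in miniature. The sketch below (Python; all names are ours, since the patent publishes no code) models a moving portion that travels between computers while the resident portion stays in one computer's memory, located through a reference carried inside the moving portion.

        # Minimal sketch of the split-token idea (all names ours; the patent
        # publishes no code): the moving portion travels between computers,
        # while the resident portion stays in one computer's memory.
        from dataclasses import dataclass

        @dataclass
        class ResidentPortion:
            data: dict                 # data employed in executing the function

        @dataclass
        class MovingPortion:
            function: str              # function the receiving computer should run
            home_node: int             # computer holding the resident portion ...
            resident_key: str          # ... and where in its memory it lives

        class Node:
            def __init__(self, node_id):
                self.node_id = node_id
                self.memory = {}       # resident portions live here

            def store_resident(self, key, data):
                self.memory[key] = ResidentPortion(data)

            def receive(self, token, mesh):
                # Fetch the resident portion over the mesh network, bypassing
                # every computer other than the token's home node.
                resident = mesh.fetch(token.home_node, token.resident_key)
                print(f"node {self.node_id} runs {token.function} on {resident.data}")

        class Mesh:
            def __init__(self, nodes):
                self.nodes = {n.node_id: n for n in nodes}

            def fetch(self, node_id, key):
                return self.nodes[node_id].memory[key]

        nodes = [Node(i) for i in range(3)]
        mesh = Mesh(nodes)
        nodes[0].store_resident("t1", {"x": 42})
        token = MovingPortion(function="integrate", home_node=0, resident_key="t1")
        nodes[2].receive(token, mesh)  # token arrives at node 2; data stays on node 0

    The point of the split is locality: only the small moving portion circulates, while the bulky resident data moves only when a computer actually needs it.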

  6. Distributed Computing for the Pierre Auger Observatory

    International Nuclear Information System (INIS)

    Chudoba, J.

    2015-01-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparing theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with the small and less reliable sites used for bulk production. The new system can also use available resources in clouds. The Dirac File Catalog replaced the LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we compare the old and the new production systems and report the experience of migrating to the new system. (paper)

  7. Distributed Computing for the Pierre Auger Observatory

    Science.gov (United States)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparing theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with the small and less reliable sites used for bulk production. The new system can also use available resources in clouds. The Dirac File Catalog replaced the LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we compare the old and the new production systems and report the experience of migrating to the new system.

  8. Pseudo-interactive monitoring in distributed computing

    International Nuclear Information System (INIS)

    Sfiligoi, I.; Bradley, D.; Livny, M.

    2009-01-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days' worth of computing every day by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fails; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures results in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically do not provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach that uses the regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.
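
    The pattern generalizes beyond Condor. A minimal sketch of a job-side hook (Python; the command whitelist and entry point are our assumptions, not the paper's actual tooling) shows the essential restriction: only non-interactive, read-only diagnostics may run inside the job sandbox, and their output is shipped back through the batch system.

        # Sketch of a job-side monitoring hook (names and whitelist are our
        # assumptions): only non-interactive, read-only diagnostics may run
        # inside the job sandbox, mirroring the ls/cat/top/ps style of access.
        import shlex
        import subprocess

        ALLOWED = {"ls", "cat", "top", "ps", "lsof", "netstat"}

        def run_monitor_command(command_line):
            """Run one read-only diagnostic command in the job environment."""
            argv = shlex.split(command_line)
            if not argv or argv[0] not in ALLOWED:
                return f"refused: {argv[0] if argv else '(empty)'} is not whitelisted"
            if argv[0] == "top":
                # Batch-mode flags keep interactive tools non-interactive.
                argv = ["top", "-b", "-n", "1"]
            result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
            return result.stdout or result.stderr

        # In the real system the request arrives through the batch system's own
        # channels; here we invoke the hook directly.
        print(run_monitor_command("ps aux"))
        print(run_monitor_command("vi job.log"))  # refused: vi is interactive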

  9. Pseudo-interactive monitoring in distributed computing

    International Nuclear Information System (INIS)

    Sfiligoi, I; Bradley, D; Livny, M

    2010-01-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days' worth of computing every day by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fails; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures results in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically do not provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach that uses the regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  10. Pseudo-interactive monitoring in distributed computing

    Energy Technology Data Exchange (ETDEWEB)

    Sfiligoi, I.; /Fermilab; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days' worth of computing every day by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fails; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures results in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically do not provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach that uses the regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  11. Computational aspects of linear control

    CERN Document Server

    2002-01-01

    Many devices (we say dynamical systems or simply systems) behave like black boxes: they receive an input, this input is transformed following some laws (usually a differential equation) and an output is observed. The problem is to regulate the input in order to control the output, that is, to obtain a desired output. Such a mechanism, where the input is modified according to the output measured, is called feedback. The study and design of such automatic processes is called control theory. As we will see, the term system embraces any device, and control theory has a wide variety of applications in the real world. Control theory is an interdisciplinary domain at the junction of differential and difference equations, system theory and statistics. Moreover, the solution of a control problem involves many topics of numerical analysis and leads to many interesting computational problems: linear algebra (QR, SVD, projections, Schur complement, structured matrices, localization of eigenvalues, computation of the...

  12. Higher order correlations in computed particle distributions

    International Nuclear Information System (INIS)

    Hanerfeld, H.; Herrmannsfeldt, W.; Miller, R.H.

    1989-03-01

    The rms emittances calculated for beam distributions using computer simulations are frequently dominated by higher order aberrations. Thus there are substantial open areas in the phase space plots. It has long been observed that the rms emittance is not an invariant to beam manipulations. The usual emittance calculation removes the correlation between transverse displacement and transverse momentum. In this paper, we explore the possibility of defining higher order correlations that can be removed from the distribution to result in a lower limit to the realizable emittance. The intent is that by inserting the correct combinations of linear lenses at the proper position, the beam may recombine in a way that cancels the effects of some higher order forces. An example might be the non-linear transverse space charge forces which cause a beam to spread. If the beam is then refocused so that the same non-linear forces reverse the inward velocities, the resulting phase space distribution may reasonably approximate the original distribution. The approach to finding the location and strength of the proper lens to optimize the transported beam is based on work by Bruce Carlsten of Los Alamos National Laboratory. 11 refs., 4 figs
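
    For concreteness, the rms emittance and the correlation subtraction can be written out directly. The sketch below (Python/NumPy; our illustration, not the authors' code) computes eps_rms = sqrt(<x^2><x'^2> - <x x'>^2), removes the usual linear x-x' correlation, and then removes a cubic correlation as an example of the higher-order subtraction the paper proposes.

        # Rms emittance before and after removing correlations; extending the
        # subtraction beyond first order is the paper's theme. (Our sketch.)
        import numpy as np

        def rms_emittance(x, xp):
            """eps_rms = sqrt(<x^2><x'^2> - <x x'>^2), with centered moments."""
            x, xp = x - x.mean(), xp - xp.mean()
            return np.sqrt(x.var() * xp.var() - np.mean(x * xp) ** 2)

        rng = np.random.default_rng(0)
        x = rng.normal(size=100_000)
        xp = 0.5 * x + 0.1 * x**3 + 0.05 * rng.normal(size=100_000)

        print("raw emittance:  ", rms_emittance(x, xp))
        # Remove the linear correlation (what a linear lens can cancel):
        slope = np.polyfit(x, xp, 1)[0]
        print("linear removed: ", rms_emittance(x, xp - slope * x))
        # Remove a cubic correlation as well (a higher-order subtraction):
        c = np.polynomial.polynomial.polyfit(x, xp, 3)
        print("cubic removed:  ", rms_emittance(x, xp - np.polynomial.polynomial.polyval(x, c)))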

  13. Intermittent control: a computational theory of human control.

    Science.gov (United States)

    Gawthrop, Peter; Loram, Ian; Lakie, Martin; Gollee, Henrik

    2011-02-01

    The paradigm of continuous control using internal models has advanced understanding of human motor control. However, this paradigm ignores some aspects of human control, including intermittent feedback, serial ballistic control, triggered responses and refractory periods. It is shown that event-driven intermittent control provides a framework to explain the behaviour of the human operator under a wider range of conditions than continuous control. Continuous control is included as a special case, but sampling, system matched hold, an intermittent predictor and an event trigger allow serial open-loop trajectories using intermittent feedback. The implementation here may be described as "continuous observation, intermittent action". Beyond explaining unimodal regulation distributions in common with continuous control, these features naturally explain refractoriness and bimodal stabilisation distributions observed in double stimulus tracking experiments and quiet standing, respectively. Moreover, given that human control systems contain significant time delays, a biological-cybernetic rationale favours intermittent over continuous control: intermittent predictive control is computationally less demanding than continuous predictive control. A standard continuous-time predictive control model of the human operator is used as the underlying design method for an event-driven intermittent controller. It is shown that when event thresholds are small and sampling is regular, the intermittent controller can masquerade as the underlying continuous-time controller and thus, under these conditions, the continuous-time and intermittent controller cannot be distinguished. This explains why the intermittent control hypothesis is consistent with the continuous control hypothesis for certain experimental conditions.
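
    A toy version makes the "continuous observation, intermittent action" idea concrete. The sketch below (Python; a deliberate simplification of the paper's framework, not the authors' implementation) observes the error continuously but recomputes the held control value only when the error exceeds a threshold and the refractory period has elapsed.

        # Toy event-driven intermittent controller: continuous observation,
        # intermittent action. A simplification of the paper's framework,
        # not the authors' implementation.
        def intermittent_control(x0, setpoint, dt=0.01, t_end=5.0,
                                 threshold=0.05, refractory=0.2, gain=2.0):
            x, u, last_event, t = x0, 0.0, -float("inf"), 0.0
            trace = []
            while t < t_end:
                error = setpoint - x          # continuous observation
                # Event trigger: act only if the error is large enough AND
                # the refractory period since the last action has elapsed.
                if abs(error) > threshold and (t - last_event) >= refractory:
                    u = gain * error          # intermittent, held action
                    last_event = t
                x += dt * (-x + u)            # simple first-order plant
                trace.append((t, x, u))
                t += dt
            return trace

        for t, x, u in intermittent_control(0.0, 1.0)[::100]:
            print(f"t={t:4.2f}  x={x:6.3f}  u={u:6.3f}")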

  14. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a

  15. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00068610; The ATLAS collaboration; Barberis, Dario; Crepe-Renaudin, Sabine Chrystel; De, Kaushik; Fassi, Farida; Stradling, Alden; Svatos, Michal; Vartapetian, Armen; Wolters, Helmut

    2017-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a member of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD's former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing...

  16. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    Adam Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run1, this task was accomplished by the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run2. The CRC position was proposed to cover some of the AMOD’s former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help train future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates ...

  17. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other groups of users use the local batch system directly. Storage capacity is distributed over several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated mostly to users from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on Torque with a custom scheduler. The clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic, with a capacity of more than 12000 cores in total.

  18. Computer controlled high voltage system

    Energy Technology Data Exchange (ETDEWEB)

    Kunov, B; Georgiev, G; Dimitrov, L [and others]

    1996-12-31

    A multichannel computer-controlled high-voltage power supply system is developed. The basic technical parameters of the system are: output voltage, 100-3000 V; output current, 0-3 mA; maximum number of channels in one crate, 78. 3 refs.

  19. Personal computers in accelerator control

    International Nuclear Information System (INIS)

    Anderssen, P.S.

    1988-01-01

    The advent of the personal computer has created a popular movement which has also made a strong impact on science and engineering. Flexible software environments combined with good computational performance and large storage capacities are becoming available at steadily decreasing cost. Of equal importance, however, is the quality of the user interface offered by many of these products. Graphics and screen interaction are available in ways that were previously possible only on specialized systems. Accelerator engineers were quick to pick up the new technology. The first applications were probably controllers and data gatherers for beam measurement equipment. Others followed, and today it is conceivable to make the personal computer a standard component of an accelerator control system. This paper reviews the experience gained at CERN so far and describes the approach taken in the design of the common control center for the SPS and the future LEP accelerators. The design goal has been to be able to integrate personal computers into the accelerator control system and to build the operator's workplace around them. (orig.)

  20. Computer controlled testing of batteries

    NARCIS (Netherlands)

    Kuiper, A.C.J.; Einerhand, R.E.F.; Visscher, W.

    1989-01-01

    A computerized testing device for batteries consists of a power supply, a multiplexer circuit connected to the batteries, a protection circuit, and an IBM Data Acquisition and Control Adapter card connected to a personal computer. The software is written in Turbo Pascal and can be easily adapted to

  1. System-wide power management control via clock distribution network

    Science.gov (United States)

    Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.

    2015-05-19

    An apparatus, method and computer program product for automatically controlling the power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The processors in the parallel computing system receive the system clock signal, including the encoded command, and adjust their power dissipation according to the encoded command.
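
    The mechanism can be illustrated schematically. In the sketch below (Python; the duty-cycle values and one-bit-per-cycle encoding are our assumptions, not the patent's actual scheme), each clock cycle carries one command bit in its pulse width, and every processor recovers the bits by thresholding the measured duty cycle.

        # Schematic sketch (hypothetical encoding, not the patent's actual
        # scheme): each clock cycle carries one command bit in its pulse
        # width; receivers threshold the measured duty cycle to recover it.
        NOMINAL, WIDE, NARROW = 0.50, 0.60, 0.40   # duty cycles

        def encode(command_bits):
            """Clock pulse-width modulator: one duty-cycle sample per cycle."""
            return [WIDE if b else NARROW for b in command_bits]

        def decode(duty_cycles):
            """Processor-side decoder: compare against the nominal duty cycle."""
            return [1 if d > NOMINAL else 0 for d in duty_cycles]

        command = [1, 0, 1, 1]                     # e.g. a 4-bit power-step command
        assert decode(encode(command)) == command
        print("recovered command:", decode(encode(command)))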

  2. IKONET: distributed accelerator and experiment control

    International Nuclear Information System (INIS)

    Koldewijn, P.

    1986-01-01

    IKONET is a network consisting of some 35 computers used to control the 500 MeV Medium Energy Amsterdam electron accelerator (MEA) and its various experiments. The control system is distributed over a whole variety of machines, which are combined in a transparent, centrally oriented network. The local hardware is switched and tuned via CAMAC by a series of minicomputers running a real-time multitasking operating system. Larger systems provide central intelligence for the higher-level control layers. An image of the complete accelerator settings is maintained by central database administrators. Different operator facilities handle touch panels, multi-purpose knobs and graphical displays. The network provides remote login facilities and file servers. On the basis of the present layout, an overview is given of future developments for subsystems of the network. (Auth.)

  3. The future of PanDA in ATLAS distributed computing

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  4. 10th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Seghrouchni, Amal; Beynier, Aurélie; Camacho, David; Herpson, Cédric; Hindriks, Koen; Novais, Paulo

    2017-01-01

    This book presents the combined peer-reviewed proceedings of the tenth International Symposium on Intelligent Distributed Computing (IDC’2016), which was held in Paris, France from October 10th to 12th, 2016. The 23 contributions address a range of topics related to theory and application of intelligent distributed computing, including: Intelligent Distributed Agent-Based Systems, Ambient Intelligence and Social Networks, Computational Sustainability, Intelligent Distributed Knowledge Representation and Processing, Smart Networks, Networked Intelligence and Intelligent Distributed Applications, amongst others.

  5. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  6. Concepts for Distributed Engine Control

    Science.gov (United States)

    Culley, Dennis E.; Thomas, Randy; Saus, Joseph

    2007-01-01

    Gas turbine engines for aero-propulsion systems are found to be highly optimized machines after over 70 years of development. Still, additional performance improvements are sought while reduction in the overall cost is increasingly a driving factor. Control systems play a vitally important part in these metrics but are severely constrained by the operating environment and the consequences of system failure. The considerable challenges facing future engine control system design have been investigated. A preliminary analysis has been conducted of the potential benefits of distributed control architecture when applied to aero-engines. In particular, reductions in size, weight, and cost of the control system are possible. NASA is conducting research to further explore these benefits, with emphasis on the particular benefits enabled by high temperature electronics and an open-systems approach to standardized communications interfaces.

  7. Xcache in the ATLAS Distributed Computing Environment

    CERN Document Server

    Hanushevsky, Andrew; The ATLAS collaboration

    2018-01-01

    Built upon the Xrootd Proxy Cache (Xcache), we developed additional features to adapt it to the ATLAS distributed computing and data environment, especially its data management system RUCIO, to help improve the cache hit rate, as well as features that make Xcache easy to use, similar to the way the Squid cache is used with the HTTP protocol. We are optimizing Xcache for HPC environments, and adapting it to the HL-LHC Data Lakes design as a component for data delivery. We packaged the software in CVMFS and in Docker and Singularity containers in order to standardize the deployment and reduce the cost of resolving issues at remote sites. We are also integrating it into RUCIO as a volatile storage system, and into various ATLAS workflows such as user analysis,

  8. Low cost highly available digital control computer

    International Nuclear Information System (INIS)

    Silvers, M.W.

    1986-01-01

    When designing digital controllers for critical plant control, it is important to provide several features. Among these are reliability, availability, maintainability, environmental protection, and low cost. An examination of several applications has led to a design that can be produced for approximately $20,000 (1000 control points). This design is compatible with modern concepts in distributed and hierarchical control. The canonical controller element is a dual-redundant, self-checking computer that communicates with a cross-strapped, electrically isolated input/output system. The input/output subsystem comprises multiple intelligent input/output cards. These cards accept commands from the primary processor, which are validated, executed, and acknowledged. Each card may be hot-replaced to facilitate sparing. The implementation of the dual-redundant computer architecture is discussed. Called the FS-86, this computer can be used for a variety of applications. It has most recently found application in the upgrade of San Francisco's Bay Area Rapid Transit (BART) train control, currently in progress, and has been proposed for feedwater control in a boiling water reactor.
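
    The validate-execute-acknowledge handshake between the processor and an intelligent input/output card can be sketched briefly (Python; the message format and checksum are our assumptions, as the FS-86 protocol is not detailed here).

        # Sketch of the validate/execute/acknowledge handshake between the
        # processor and an intelligent I/O card. The message format and
        # CRC check are our assumptions; the FS-86 protocol is not detailed.
        import zlib

        def make_command(card, point, value):
            payload = f"{card}:{point}:{value}".encode()
            return payload + zlib.crc32(payload).to_bytes(4, "big")

        def card_handle(message):
            payload, crc = message[:-4], int.from_bytes(message[-4:], "big")
            if zlib.crc32(payload) != crc:
                return "NAK: checksum failure"       # validation failed
            card, point, value = payload.decode().split(":")
            # ... drive the output point here (execution) ...
            return f"ACK: card {card} point {point} set to {value}"

        print(card_handle(make_command(3, 17, 1)))
        corrupted = bytearray(make_command(3, 17, 1))
        corrupted[0] ^= 0xFF                         # simulate a corrupted command
        print(card_handle(bytes(corrupted)))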

  9. Proceedings of workshop on distributed computing and network

    International Nuclear Information System (INIS)

    Abe, F.; Yuasa, F.

    1993-02-01

    'Distributed Computing and Network' is one of the hot topics in the field of computing. Recent progress in computer technology is providing a new paradigm for computing, even in high energy physics. In particular, workstation-based computer systems are opening a new, active field of computer application to the sciences. The major topics discussed at this symposium are distributed computing and wide area research networks for domestic and international links. The two-day symposium provided enough topics to foresee the next direction of our computing environment. Seventy people got together to discuss these interesting themes as well as to exchange information on computer technologies. (J.P.N.)

  10. DISTRIBUTED COMPUTING SUPPORT CONTRACT USER SURVEY

    CERN Multimedia

    2001-01-01

    IT Division operates a Distributed Computing Support Service, which offers support to owners and users of all variety of desktops throughout CERN as well as more dedicated services for certain groups, divisions and experiments. It also provides the staff who operate the central and satellite Computing Helpdesks, it supports printers throughout the site and it provides the installation activities of the IT Division PC Service. We have published a questionnaire which seeks to gather your feedback on how the services are seen, how they are progressing and how they can be improved. Please take a few minutes to fill in this questionnaire. Replies will be treated in confidence if desired although you may also request an opportunity to be contacted by CERN's service management directly. Please tell us if you met problems but also if you had a successful conclusion to your request for assistance. You will find the questionnaire at the web site http://wwwinfo/support/survey/desktop-contract There will also be a link ...

  11. DISTRIBUTED COMPUTING SUPPORT SERVICE USER SURVEY

    CERN Multimedia

    2001-01-01

    IT Division operates a Distributed Computing Support Service, which offers support to owners and users of all variety of desktops throughout CERN as well as more dedicated services for certain groups, divisions and experiments. It also provides the staff who operate the central and satellite Computing Helpdesks, it supports printers throughout the site and it provides the installation activities of the IT Division PC Service. We have published a questionnaire, which seeks to gather your feedback on how the services are seen, how they are progressing and how they can be improved. Please take a few minutes to fill in this questionnaire. Replies will be treated in confidence if desired although you may also request an opportunity to be contacted by CERN's service management directly. Please tell us if you met problems but also if you had a successful conclusion to your request for assistance. You will find the questionnaire at the web site http://wwwinfo/support/survey/desktop-contract There will also be a link...

  12. Distributed computing for FTU data handling

    Energy Technology Data Exchange (ETDEWEB)

    Bertocchi, A. E-mail: bertocchi@frascati.enea.it; Bracco, G.; Buceti, G.; Centioli, C.; Giovannozzi, E.; Iannone, F.; Panella, M.; Vitale, V

    2002-06-01

    The growth of data warehouses in tokamak experiments is leading fusion laboratories to provide new IT solutions for data handling. In the last three years, the Frascati Tokamak Upgrade (FTU) experimental database was migrated from an IBM mainframe to a Unix distributed computing environment. The migration efforts have taken into account the following items: (1) a new data storage solution based on a storage area network over fibre channel; (2) the Andrew file system (AFS) for wide area network file sharing; (3) a 'one measure/one file' philosophy replacing 'one shot/one file' to provide faster read/write data access; (4) more powerful services, such as AFS, CORBA and MDSplus, to allow users to access the FTU database from different clients, regardless of their OS; (5) wide availability of data analysis tools, from the locally developed utility SHOW to the multi-platform Matlab, Interactive Data Language and jScope (all these tools are now also able to access the Joint European Torus data, in the framework of the remote data access activity); (6) a batch-computing cluster of Alpha/Compaq Tru64 CPUs based on CODINE/GRD to optimize the utilization of software and hardware resources.

  13. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...

  14. Automating usability of ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Tupputi, S A; Girolamo, A Di; Kouba, T; Schovancová, J

    2014-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes history of storage monitoring tests outcome. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage areas monitoring and central management at all levels. Such review has involved the reordering and optimization of SAM tests deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows monitoring the storage resources status with fine time-granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, the human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB working principles and features. We present also the decrease of human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
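
    The inference over test history admits a very simple approximation. The sketch below (Python; our simplification, not the actual SAAB algorithm) blacklists a storage area when its recent monitoring tests fail too often and whitelists it again once they recover, mimicking the automatic outage handling described above.

        # Simplified history-based blacklisting in the spirit of SAAB (our
        # approximation, not the actual algorithm): judge each storage area
        # by the failure rate over its last WINDOW monitoring tests.
        from collections import defaultdict, deque

        WINDOW, BLACKLIST_AT, WHITELIST_AT = 10, 0.5, 0.1

        history = defaultdict(lambda: deque(maxlen=WINDOW))
        blacklisted = set()

        def record_test(storage_area, passed):
            h = history[storage_area]
            h.append(passed)
            failure_rate = 1 - sum(h) / len(h)
            if failure_rate >= BLACKLIST_AT:
                blacklisted.add(storage_area)        # automatic outage handling
            elif failure_rate <= WHITELIST_AT:
                blacklisted.discard(storage_area)    # automatic recovery

        for outcome in [True, False, False, False, False, False]:
            record_test("SITE_DATADISK", outcome)
        print("blacklisted:", blacklisted)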

  15. Control by personal computer and Interface 1

    International Nuclear Information System (INIS)

    Kim, Eung Mug; Park, Sun Ho

    1989-03-01

    This book consists of three chapters. The first chapter deals with the basics of microcomputer control: computer systems, microcomputer systems, control by microcomputer, and control systems for calculators. The second chapter describes interfaces, covering such basics as the 8255 parallel interface, the 6821 parallel interface, the parallel interface of a personal computer, reading BCD code through a parallel interface, the IEEE-488 interface, the RS-232C interface, and data transmission between a personal computer and a measuring instrument. The third chapter covers control experiments by microcomputer, experiments with an eight-bit computer, and control experiments in machine code and BASIC.

  16. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received). However, there are barriers to the aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
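
    The central observation, that likelihood-based fitting needs only aggregated intermediate results, is easy to demonstrate. The sketch below (Python/NumPy; our illustration, whereas the paper's tools are built on R) fits a logistic model across three simulated sites, each of which returns only its local gradient contribution and never its raw records.

        # Distributed maximum-likelihood sketch (ours; the paper's toolset is
        # built on R): each site computes the gradient of the logistic
        # log-likelihood on its own data, and only these aggregates (never
        # raw records) leave the site.
        import numpy as np

        def local_gradient(beta, X, y):
            """One site's contribution to the log-likelihood gradient."""
            p = 1 / (1 + np.exp(-X @ beta))
            return X.T @ (y - p)

        rng = np.random.default_rng(1)
        true_beta = np.array([0.8, -1.2])
        sites = []
        for _ in range(3):                  # three hospitals, say
            X = rng.normal(size=(500, 2))
            y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))
            sites.append((X, y))

        beta = np.zeros(2)
        for _ in range(200):                # coordinator's gradient ascent
            grad = sum(local_gradient(beta, X, y) for X, y in sites)
            beta += 1e-3 * grad
        print("estimated beta:", beta)      # close to [0.8, -1.2]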

  17. An Overview of Cloud Computing in Distributed Systems

    Science.gov (United States)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing, having evolved from grid computing and distributed computing. The cloud plays an important role in large organizations by maintaining huge volumes of data with limited resources. The cloud also helps in resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of cloud organization and some of the basic security issues pertaining to the cloud.

  18. Distributed Memory Parallel Computing with SEAWAT

    Science.gov (United States)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, limiting the predictive power of these models. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, where the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner in a way that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator, c) it is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (about 10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources.
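
    The numerical core, a Krylov accelerator preconditioned subdomain by subdomain, fits in a short sketch. Below (Python/NumPy; a serial, non-overlapping block-Jacobi stand-in for the overlapping additive Schwarz/MPI scheme of the paper), each 'subdomain' solves only with its own diagonal block inside a preconditioned CG iteration.

        # Sketch of the PKS idea: CG preconditioned subdomain by subdomain.
        # Serial, non-overlapping block-Jacobi here as a stand-in for the
        # overlapping additive Schwarz/MPI scheme of the paper.
        import numpy as np

        def apply_preconditioner(r, blocks):
            """Each 'subdomain' solves with its own diagonal block only."""
            z = np.zeros_like(r)
            for lo, hi, inv in blocks:
                z[lo:hi] = inv @ r[lo:hi]
            return z

        def pcg(A, b, blocks, tol=1e-10, maxit=500):
            x = np.zeros_like(b)
            r = b - A @ x
            z = apply_preconditioner(r, blocks)
            p, rz = z.copy(), r @ z
            for it in range(maxit):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    return x, it
                z = apply_preconditioner(r, blocks)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, maxit

        n, nsub = 200, 4                    # 1-D Laplacian, four subdomains
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        step = n // nsub
        blocks = [(i, i + step, np.linalg.inv(A[i:i + step, i:i + step]))
                  for i in range(0, n, step)]
        x, iters = pcg(A, b, blocks)
        print(f"converged in {iters} iterations; residual {np.linalg.norm(b - A @ x):.2e}")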

  19. Distributed computing testbed for a remote experimental environment

    International Nuclear Information System (INIS)

    Butner, D.N.; Casper, T.A.; Howard, B.C.; Henline, P.A.; Davis, S.L.; Barnes, D.

    1995-01-01

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.

  20. Computation and control with neural nets

    Energy Technology Data Exchange (ETDEWEB)

    Corneliusen, A.; Terdal, P.; Knight, T.; Spencer, J.

    1989-10-04

    As energies have increased exponentially with time so have the size and complexity of accelerators and control systems. NN may offer the kinds of improvements in computation and control that are needed to maintain acceptable functionality. For control their associative characteristics could provide signal conversion or data translation. Because they can do any computation such as least squares, they can close feedback loops autonomously to provide intelligent control at the point of action rather than at a central location that requires transfers, conversions, hand-shaking and other costly repetitions like input protection. Both computation and control can be integrated on a single chip, printed circuit or an optical equivalent that is also inherently faster through full parallel operation. For such reasons one expects lower costs and better results. Such systems could be optimized by integrating sensor and signal processing functions. Distributed nets of such hardware could communicate and provide global monitoring and multiprocessing in various ways e.g. via token, slotted or parallel rings (or Steiner trees) for compatibility with existing systems. Problems and advantages of this approach such as an optimal, real-time Turing machine are discussed. Simple examples are simulated and hardware implemented using discrete elements that demonstrate some basic characteristics of learning and parallelism. Future 'microprocessors' are predicted and requested on this basis. 19 refs., 18 figs.

  1. Computation and control with neural nets

    International Nuclear Information System (INIS)

    Corneliusen, A.; Terdal, P.; Knight, T.; Spencer, J.

    1989-01-01

    As energies have increased exponentially with time so have the size and complexity of accelerators and control systems. NN may offer the kinds of improvements in computation and control that are needed to maintain acceptable functionality. For control their associative characteristics could provide signal conversion or data translation. Because they can do any computation such as least squares, they can close feedback loops autonomously to provide intelligent control at the point of action rather than at a central location that requires transfers, conversions, hand-shaking and other costly repetitions like input protection. Both computation and control can be integrated on a single chip, printed circuit or an optical equivalent that is also inherently faster through full parallel operation. For such reasons one expects lower costs and better results. Such systems could be optimized by integrating sensor and signal processing functions. Distributed nets of such hardware could communicate and provide global monitoring and multiprocessing in various ways e.g. via token, slotted or parallel rings (or Steiner trees) for compatibility with existing systems. Problems and advantages of this approach such as an optimal, real-time Turing machine are discussed. Simple examples are simulated and hardware implemented using discrete elements that demonstrate some basic characteristics of learning and parallelism. Future 'microprocessors' are predicted and requested on this basis. 19 refs., 18 figs

  2. National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Van Arsdall, P.J. LLNL

    1998-01-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance

  3. Computer controls for the WITCH experiment

    CERN Document Server

    Tandecki, M; Van Gorp, S; Friedag, P; De Leebeeck, V; Beck, D; Brand, H; Weinheimer, C; Breitenfeldt, M; Traykov, E; Mader, J; Roccia, S; Severijns, N; Herlert, A; Wauters, F; Zakoucky, D; Kozlov, V; Soti, G

    2011-01-01

    The WITCH experiment is a medium-scale experimental set-up located at ISOLDE/CERN. It combines a double Penning trap system with a retardation spectrometer for energy measurements of recoil ions from beta decay. For correct operation of such a set-up a whole range of different devices is required. Along with the installation and optimization of the set-up, a computer control system was developed to control these devices. The CS-Framework that is developed and maintained at GSI was chosen as the basis for this control system, as it is perfectly suited to handle the distributed nature of a control system. We report here on the required hardware for WITCH, along with the basis of this CS-Framework and the add-ons that were implemented for WITCH.

  4. Integrating Xgrid into the HENP distributed computing model

    International Nuclear Information System (INIS)

    Hajdu, L; Lauret, J; Kocoloski, A; Miller, M

    2008-01-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide to users a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making tasks and jobs submission effortlessly at reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology

  5. Integrating Xgrid into the HENP distributed computing model

    Science.gov (United States)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide to users a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making tasks and jobs submission effortlessly at reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  6. The CESR computer control system

    International Nuclear Information System (INIS)

    Helmke, R.G.; Rice, D.H.; Strohman, C.

    1986-01-01

    The control system for the Cornell Electron Storage Ring (CESR) has functioned satisfactorily since its implementation in 1979. Key characteristics are fast tuning response, almost exclusive use of FORTRAN as a programming language, and efficient coordinated ramping of CESR guide field elements. This original system has not, however, been able to keep pace with the increasing complexity of operation of CESR associated with performance upgrades. Limitations in address space, expandability, access to data system-wide, and program development impediments have prompted the undertaking of a major upgrade. The system under development accommodates up to 8 VAX computers for all applications programs. The database and communications semaphores reside in a shared multi-ported memory, and each hardware interface bus is controlled by a dedicated 32 bit micro-processor in a VME based system. (orig.)

  7. Quantum Internet: from Communication to Distributed Computing!

    OpenAIRE

    Caleffi, Marcello; Cacciapuoti, Angela Sara; Bianchi, Giuseppe

    2018-01-01

    In this invited paper, the authors discuss the exponential computing speed-up achievable by interconnecting quantum computers through a quantum internet. They also identify key future research challenges and open problems for quantum internet design and deployment.

  8. Computer systems for nuclear installation data control

    International Nuclear Information System (INIS)

    1987-09-01

    The computer programs developed by the Divisao de Instalacoes Nucleares (DIN) of the Brazilian CNEN for data control of nuclear installations in Brazil are presented. The following computer programs are described: control of registered companies; control of industrial sources, irradiators and monitors; control of liable persons; control of industry irregularities; elaboration of credence tests; shielding analysis; and control of waste storage. [pt]

  9. Earth observation scientific workflows in a distributed computing environment

    CSIR Research Space (South Africa)

    Van Zyl, TL

    2011-09-01

    Full Text Available capabilities has focused on the web services approach as exemplified by the OGC's Web Processing Service and by GRID computing. The approach to leveraging distributed computing resources described in this paper uses instead remote objects via RPy...

  10. Prototyping and Simulating Parallel, Distributed Computations with VISA

    National Research Council Canada - National Science Library

    Demeure, Isabelle M; Nutt, Gary J

    1989-01-01

    ...] to support the design, prototyping, and simulation of parallel, distributed computations. In particular, VISA is meant to guide the choice of partitioning and communication strategies for such computations, based on their performance...

  11. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    OpenAIRE

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and s...

  12. Distributed Computations Environment Protection Using Artificial Immune Systems

    Directory of Open Access Journals (Sweden)

    A. V. Moiseev

    2011-12-01

    Full Text Available In this article the authors describe the possibility of applying artificial immune systems to protect distributed computing environments from certain types of malicious impacts.

  13. A Distributed Computational Infrastructure for Science and Education

    Directory of Open Access Journals (Sweden)

    Rustam K. Bazarov

    2014-06-01

    Full Text Available Researchers have lately been paying increasingly more attention to parallel and distributed algorithms for solving high-dimensionality problems. In this regard, the issue of acquiring or renting computational resources becomes a topical one for employees of scientific and educational institutions. This article examines technology and methods for organizing a distributed computational infrastructure. The author addresses the experience of creating a high-performance system powered by existing clusterization and grid computing technology. The approach examined in the article helps minimize financial costs, aggregate territorially distributed computational resources and ensure a more rational use of available computer equipment, eliminating downtime.

  14. modeling workflow management in a distributed computing system

    African Journals Online (AJOL)

    Dr Obe

    communication system, which allows for computerized support. ... Keywords: Distributed computing system; Petri nets; Workflow management. 1. ... A distributed operating system usually .... the questionnaire is returned with invalid data.

  15. Alidron, A distributed control system for the Internet of Things

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Making many devices discover and interact with each other is the big challenge ahead of the IoT. The Alidron project aims at finding a different approach based on features seen in industrial control systems, with a distributed twist, while keeping a fuzzy boundary between edge computing and cloud computing.

  16. Parallel and distributed processing in power system simulation and control

    Energy Technology Data Exchange (ETDEWEB)

    Falcao, Djalma M [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    1994-12-31

    Recent advances in computer technology will certainly have a great impact on the methodologies used in power system expansion and operational planning as well as in real-time control. Parallel and distributed processing are among the new technologies that present great potential for application in these areas. Parallel computers use multiple functional or processing units to speed up computation, while distributed processing computer systems are collections of computers joined together by high-speed communication networks, an arrangement with many objectives and advantages. The paper presents some ideas for the use of parallel and distributed processing in power system simulation and control. It also comments on some of the current research work in these topics and presents a summary of the work presently being developed at COPPE. (author) 53 refs., 2 figs.

  17. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments

  18. Successful initiation of and management through a distributed computer upgrade

    International Nuclear Information System (INIS)

    Barich, F.T.; Crawford, T.H.

    1995-01-01

    Processing capacity, the lack of data analysis tools, obsolescence, and spare parts issues are forcing utilities to upgrade or replace their plant computer systems with newer, larger systems. As a result, the utility faces an increasing number of new technologies, such as fiber optics and communication standards (FDDI, ATM, etc.), Graphic User Interface using X-Windows, and distributed architectures that eliminate the host based computer. Technologies such as these, if properly applied, can greatly enhance the capabilities and functions of the existing system. Besides this, the utility also faces functionality previously not available through the plant computer, such as integrated plant monitoring and digital controls, voice, imaging, etc. With computing technology vastly changing from traditional host systems, the utility confronts the question, "what are my needs (now and for the future), and what new system can meet those needs most effectively?" This paper describes the management process necessary to define the needs and then carry out a successful computer replacement project

  19. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00291854; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computin...

  20. Computational Intelligence based techniques for islanding detection of distributed generation in distribution network: A review

    International Nuclear Information System (INIS)

    Laghari, J.A.; Mokhlis, H.; Karimi, M.; Bakar, A.H.A.; Mohamad, Hasmaini

    2014-01-01

    Highlights: • Unintentional and intentional islanding, their causes, and solutions are presented. • Remote, passive, active and hybrid islanding detection techniques are discussed. • The limitations of these techniques in accurately detecting islanding are discussed. • The ability of computational intelligence techniques to detect islanding is discussed. • A review of ANN, fuzzy logic control, ANFIS, and decision tree techniques is provided. - Abstract: Accurate and fast islanding detection of distributed generation is highly important for its successful operation in distribution networks. Up to now, various islanding detection techniques based on communication, passive, active and hybrid methods have been proposed. However, each technique suffers from certain demerits that cause inaccuracies in islanding detection. Computational intelligence based techniques, due to their robustness and flexibility in dealing with complex nonlinear systems, are an option that might solve this problem. This paper aims to provide a comprehensive review of computational intelligence based techniques applied for islanding detection of distributed generation. Moreover, the paper compares the accuracies of computational intelligence based techniques over existing techniques to provide useful information for industries and utility researchers to determine the best method for their respective systems

  1. Magnetic compatibility of standard components for electrical installations: Computation of the background field and consequences on the design of the electrical distribution boards and control boards for the ITER Tokamak building

    International Nuclear Information System (INIS)

    Benfatto, I.; Bettini, P.; Cavinato, M.; Lorenzi, A. De; Hourtoule, J.; Serra, E.

    2005-01-01

    Inside the proposed Tokamak building, the ITER poloidal field magnet system would produce a stray magnetic field of up to 70 mT. This is a very unusual environmental condition for electrical installation equipment, and limited information is available on the magnetic compatibility of standard components for electrical distribution boards and control boards. Because this information is a necessary input for the design of the electrical installation inside the proposed ITER Tokamak building, specific investigations have been carried out by the ITER European Participant Team. The paper reports on the computation of the background magnetic field map inside the ITER Tokamak building and the consequences on the design of the electrical installations of this building. The effects of the steel inside the building structure and the feasibility of magnetic shields for electrical distribution boards and control boards are also reported in the paper. The results of the test campaigns on the magnetic field compatibility of standard components for electrical distribution boards and control boards are reported in companion papers published in these proceedings

  2. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2001-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  3. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...

  4. Improving CMS data transfers among its distributed computing facilities

    International Nuclear Information System (INIS)

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  5. Modeling Workflow Management in a Distributed Computing System ...

    African Journals Online (AJOL)

    Distributed computing is becoming increasingly important in our daily life. This is because it enables the people who use it to share information more rapidly and increases their productivity. A major characteristic feature of distributed computing is the explicit representation of process logic within a communication system, ...

  6. Distributed cooperative control of AC microgrids

    Science.gov (United States)

    Bidram, Ali

    In this dissertation, the comprehensive secondary control of electric power microgrids is of concern. Microgrid technical challenges are mainly addressed through the hierarchical control structure, including primary, secondary, and tertiary control levels. The primary control level is locally implemented at each distributed generator (DG), while the secondary and tertiary control levels are conventionally implemented through a centralized control structure. The centralized structure requires a central controller, which increases reliability concerns by posing a single point of failure. In this dissertation, the distributed control structure using the distributed cooperative control of multi-agent systems is exploited to increase the secondary control reliability. The secondary control objectives are microgrid voltage and frequency, and the DGs' active and reactive powers. Fully distributed control protocols are implemented through distributed communication networks. In the distributed control structure, each DG only requires its own information and the information of its neighbors on the communication network. The distributed structure obviates the requirements for a central controller and a complex communication network which, in turn, improves the system reliability. Since the DG dynamics are nonlinear and non-identical, input-output feedback linearization is used to transform the nonlinear dynamics of DGs to linear dynamics. The proposed control frameworks cover the control of microgrids containing inverter-based DGs. Typical microgrid test systems are used to verify the effectiveness of the proposed control protocols.
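
    To make the distributed cooperative structure concrete, here is a minimal numerical sketch, not the dissertation's actual protocol: each DG nudges its frequency toward its neighbors' values on a sparse communication graph, with one DG pinned to the nominal reference, so frequency restoration emerges without a central controller. The graph, gains, and initial frequencies are invented for illustration.

```python
import numpy as np

# Communication graph of 4 DGs in a ring; a[i][j] = 1 means DG i hears DG j.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

freq = np.array([59.7, 59.9, 60.2, 60.1])  # post-disturbance frequencies (Hz)
ref = 60.0    # nominal frequency, known only to the pinned leader DG 0
gain = 0.2    # consensus step size

for _ in range(200):
    # Each DG i moves toward its neighbors: sum_j a_ij * (f_j - f_i).
    disagreement = A @ freq - A.sum(axis=1) * freq
    pinning = np.array([ref - freq[0], 0.0, 0.0, 0.0])  # leader tracking term
    freq = freq + gain * (disagreement + pinning)

print(freq.round(4))  # all four DGs settle near 60.0 Hz
```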

  7. Programming Languages for Distributed Computing Systems

    NARCIS (Netherlands)

    Bal, H.E.; Steiner, J.G.; Tanenbaum, A.S.

    1989-01-01

    When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less

  8. Building mail server on distributed computing system

    International Nuclear Information System (INIS)

    Akihiro Shibata; Osamu Hamada; Tomoko Oshikubo; Takashi Sasaki

    2001-01-01

    Electronic mail has become an indispensable function in daily work, and server stability and performance are required. Using DCE and DFS we have built a distributed electronic mail server; that is, servers such as SMTP and IMAP are distributed symmetrically and provide seamless access

  9. Distributed control system for the National Synchrotron Light Source

    International Nuclear Information System (INIS)

    Batchelor, K.; Culwick, B.B.; Goldstick, J.; Sheehan, J.; Smith, J.

    1979-01-01

    Until recently, accelerator and similar control systems have used modular interface hardware such as CAMAC or DATACON which translated digital computer commands transmitted over some data link into hardware device status and monitoring variables. Such modules possessed little more than local buffering capability in the processing of commands and data. The advent of the microprocessor has made available low-cost small computers of significant computational capability. This paper describes how microcomputers, including such microprocessors and associated memory, input/output devices and interrupt facilities, have been incorporated into a distributed system for the control of the NSLS

  10. COMPUTER CONTROL OF BEHAVIORAL EXPERIMENTS.

    Science.gov (United States)

    SIEGEL, LOUIS

    The LINC computer provides a particular schedule of reinforcement for behavioral experiments by executing a sequence of computer operations in conjunction with a specially designed interface. The interface is the means of communication between the experimental chamber and the computer. The program and interface of an experiment involving a pigeon…

  11. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  12. The Principles and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed and the lessons we learned from delivering effective and dependable software tools in an ever changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker Miron Livny received a B.Sc. degree in Physics and Mat...

  13. Towards an Approach of Semantic Access Control for Cloud Computing

    Science.gov (United States)

    Hu, Luokai; Ying, Shi; Jia, Xiangyang; Zhao, Kai

    With the development of cloud computing, the mutual understandability among distributed Access Control Policies (ACPs) has become an important issue in the security field of cloud computing. Semantic Web technology provides the solution to semantic interoperability of heterogeneous applications. In this paper, we analyze existing access control methods and present a new Semantic Access Control Policy Language (SACPL) for describing ACPs in the cloud computing environment. An Access Control Oriented Ontology System (ACOOS) is designed as the semantic basis of SACPL. The ontology-based SACPL language can effectively solve the interoperability issue of distributed ACPs. This study enriches research on applying Semantic Web technology in the security field, and provides a new way of thinking about access control in cloud computing.

  14. Applying Distributed Object Technology to Distributed Embedded Control Systems

    DEFF Research Database (Denmark)

    Jørgensen, Bo Nørregaard; Dalgaard, Lars

    2012-01-01

    In this paper, we describe our Java RMI inspired Object Request Broker architecture MicroRMI for use with networked embedded devices. MicroRMI relieves the software developer from the tedious and error-prone job of writing communication protocols for interacting with such embedded devices. MicroRMI supports easy integration of high-level application specific control logic with low-level device specific control logic. Our experience from applying MicroRMI in the context of a distributed robotics control application clearly demonstrates that it is feasible to use distributed object technology in developing control systems for distributed embedded platforms possessing severe resource restrictions.
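
    As an analogy only (Python's standard-library XML-RPC rather than the MicroRMI API), the sketch below shows the distributed-object pattern the abstract describes: a device-side object is registered with a request broker, and client-side control logic invokes its method as if it were local.

```python
import threading
import time
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

class MotorController:
    """Device-side object; its registered methods become remotely callable."""
    def set_speed(self, rpm):
        # On a real embedded node this would drive actuator hardware.
        return "speed set to %d rpm" % rpm

def serve():
    server = SimpleXMLRPCServer(("localhost", 9000), logRequests=False)
    server.register_instance(MotorController())
    server.serve_forever()

threading.Thread(target=serve, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# Client side: the call reads like a local invocation, but the proxy
# marshals it over the network to the device object.
motor = ServerProxy("http://localhost:9000")
print(motor.set_speed(1200))
```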

  15. Power distribution monitoring and control in the RBMK type reactors

    International Nuclear Information System (INIS)

    Emel'yanov, I.Ya.; Postnikov, V.V.; Volod'ko, Yu.I.

    1980-01-01

    Considered are the structures of monitoring and control systems for the RBMK-1000 reactor, including three main systems with high independence: the control and safety system (CSS); the system for physical control of energy distribution (SPCED); and the Scala system for centralized control (SCC). The main functions and peculiarities of each system are discussed. Main attention is paid to new structural solutions and new equipment components used in these systems. Described are the RBMK operation software and the routine of energy distribution control in it. It is noted that the set of reactor control and monitoring systems has a hierarchical structure, the first level of which includes analog systems (CSS and SPCED) normalizing and transmitting detector signals to the systems of the second level based on computers and realizing computer data processing, data representation to the operator, automatic (through CSS) control of energy distribution, diagnostics of equipment condition and local safety with provision for existing reserves with respect to crisis and thermal loading of fuel assemblies. The third level includes a power computer carrying out complex physical and optimization calculations and providing interconnections with the external computer of the power system. A typical feature of the complex is the provision of local automatic safety of the reactor from erroneous withdrawal of any control rod. The complex is designed for complete automation of energy distribution control in the reactor in steady-state and transient operating conditions

  16. From parallel to distributed computing for reactive scattering calculations

    International Nuclear Information System (INIS)

    Lagana, A.; Gervasi, O.; Baraglia, R.

    1994-01-01

    Some reactive scattering codes have been ported to different innovative computer architectures ranging from massively parallel machines to clustered workstations. The porting has required a drastic restructuring of the codes to single out computationally decoupled CPU-intensive subsections. The suitability of different theoretical approaches for parallel and distributed computing restructuring is discussed and the efficiency of related algorithms evaluated

  17. A Weibull distribution accrual failure detector for cloud computing.

    Science.gov (United States)

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are used to build high availability distributed systems as the fundamental component. To meet the requirement of a complicated large-scale distributed system, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on Weibull Distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
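
    A minimal sketch of the accrual idea under a Weibull assumption: the suspicion level is derived from the Weibull CDF of the time elapsed since the last heartbeat. The shape and scale parameters below are invented stand-ins for values that would be fitted from observed inter-arrival times; the paper's estimation procedure is not reproduced.

```python
import math

def weibull_cdf(t, k, lam):
    """P(inter-arrival time <= t) for a Weibull(shape=k, scale=lam)."""
    return 1.0 - math.exp(-((t / lam) ** k))

def phi(t_since_last, k, lam):
    """Accrual suspicion level: -log10 of the probability that the next
    heartbeat is still on its way. Larger phi means stronger suspicion."""
    p_still_coming = 1.0 - weibull_cdf(t_since_last, k, lam)
    return -math.log10(max(p_still_coming, 1e-12))

# Invented parameters: heartbeats roughly every second, mildly bursty.
k, lam = 1.5, 1.0
for t in (0.5, 1.0, 2.0, 4.0):
    print(f"{t:.1f}s since last heartbeat -> phi = {phi(t, k, lam):.2f}")
```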

  18. Distributed computing environment monitoring and user expectations

    International Nuclear Information System (INIS)

    Cottrell, R.L.A.; Logg, C.A.

    1996-01-01

    This paper discusses the growing needs for distributed system monitoring and compares it to current practices. It then goes on to identify the components of distributed system monitoring and shows how they are implemented and successfully used at one site today to address the Local Area Network (LAN), the Wide Area Network (WAN), and host monitoring. It shows how this monitoring can be used to develop realistic service level expectations and also identifies the costs. Finally, the paper briefly discusses the future challenges in network monitoring. (author)

  19. Integrating Xgrid into the HENP distributed computing model

    Energy Technology Data Exchange (ETDEWEB)

    Hajdu, L; Lauret, J [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kocoloski, A; Miller, M [Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)], E-mail: kocolosk@mit.edu

    2008-07-15

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), putting task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  20. Tools for remote computing in accelerator control

    International Nuclear Information System (INIS)

    Anderssen, P.S.; Frammery, V.; Wilcke, R.

    1990-01-01

    In modern accelerator control systems, the intelligence of the equipment is distributed in the geographical and the logical sense. Control processes for a large variety of tasks reside in both the equipment and the control computers. Hence successful operation hinges on the availability and reliability of the communication infrastructure. The computers are interconnected by a communication system and use remote procedure calls and message passing for information exchange. These communication mechanisms need a well-defined convention, i.e. a protocol. They also require flexibility in both the setup and changes to the protocol specification. The network compiler is a tool which provides the programmer with a means of establishing such a protocol for his application. Input to the network compiler is a single interface description file provided by the programmer. This file is written according to a grammar, and completely specifies the interprocess communication interfaces. Passed through the network compiler, the interface description file automatically produces the additional source code needed for the protocol. Hence the programmer does not have to be concerned about the details of the communication calls. Any further additions and modifications are made easy, because all the information about the interface is kept in a single file. (orig.)
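
    As a toy rendition of the idea, with an invented one-line description syntax (the actual grammar and generated code of the tool are not shown in this record), a compiler pass can turn each interface entry into a client stub that hides the communication call:

```python
# Invented interface description: one "name(args)" entry per line.
INTERFACE_DESCRIPTION = """
set_magnet_current(amps)
read_beam_position(monitor_id)
"""

STUB_TEMPLATE = '''\
def {name}({args}):
    """Auto-generated remote-call stub for '{name}'."""
    # Generated code assumes a _send_request transport helper at runtime.
    return _send_request("{name}", [{args}])
'''

def compile_interface(description):
    """Emit client stub source code from the interface description."""
    stubs = []
    for line in description.strip().splitlines():
        name, rest = line.split("(", 1)
        args = rest.rstrip(")").strip()
        stubs.append(STUB_TEMPLATE.format(name=name.strip(), args=args))
    return "\n".join(stubs)

print(compile_interface(INTERFACE_DESCRIPTION))
```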

  1. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    A computer simulation of dose distribution has been done in Visual Basic according to the arrangement and activities of Co-60 sources. This program provides the dose distribution in treated products depending on the product density and desired dose. The program is useful for optimization of source distribution during the loading process. There is good agreement between the program's calculated data and experimental data. (Author)

  2. A lightweight communication library for distributed computing

    NARCIS (Netherlands)

    Groen, D.; Rieder, S.; Grosso, P.; de Laat, C.; Portegies Zwart, S.

    2010-01-01

    We present MPWide, a platform-independent communication library for performing message passing between computers. Our library allows coupling of several local message passing interface (MPI) applications through a long-distance network and is specifically optimized for such communications. The

  3. AGIS: Evolution of Distributed Computing Information system for ATLAS

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria; Karavakis, Edward

    2015-01-01

    The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  4. Distributed computing environment monitoring and user expectations

    International Nuclear Information System (INIS)

    Cottrell, R.L.A.; Logg, C.A.

    1995-11-01

    This paper discusses the growing needs for distributed system monitoring and compares it to current practices. It then goes on to identify the components of distributed system monitoring and shows how they are implemented and successfully used at one site today to address the Local Area Network (LAN), network services and applications, the Wide Area Network (WAN), and host monitoring. It shows how this monitoring can be used to develop realistic service level expectations and also identifies the costs. Finally, the paper briefly discusses the future challenges in network monitoring

  5. Computation of the efficiency distribution of a multichannel focusing collimator

    International Nuclear Information System (INIS)

    Balasubramanian, A.; Venkateswaran, T.V.

    1977-01-01

    This article describes two computer methods of calculating the point source efficiency distribution functions of a focusing collimator with round tapered holes. The first method, which computes only the geometric efficiency distribution, is adequate for low energy collimators, while the second method, which computes both geometric and penetration efficiencies, can be used for medium and high energy collimators. The scatter contribution to the efficiency is not taken into account. In the first method the efficiency distribution of a single cone of the collimator is obtained and the data are used for computing the distribution of the whole collimator. For high energy collimators the entire detector region is imagined to be divided into elemental areas. The efficiency of an elemental area is computed after suitably weighting for the penetration within the collimator septa, which is determined by three dimensional geometric techniques. The method of computing the line source efficiency distribution from the point source distribution is also explained. The formulations have been tested by computing the efficiency distribution of several commercial collimators and collimators fabricated by us. (Auth.)

  6. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
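
    A hedged sketch of the routing step the claim describes, with invented node names: hash the metadata key to determine which burst buffer's portion of the distributed key-value store should answer the request.

```python
import hashlib

# Hypothetical burst-buffer nodes fronting the distributed key-value store.
BURST_BUFFERS = ["bb0.example", "bb1.example", "bb2.example"]

def owner_of(key: str) -> str:
    """Deterministically map a metadata key to the burst buffer holding it."""
    digest = hashlib.sha256(key.encode()).digest()
    return BURST_BUFFERS[int.from_bytes(digest[:8], "big") % len(BURST_BUFFERS)]

# Each metadata request is forwarded to the owning node's local store.
for block in ("file1/block0", "file1/block1", "sim/checkpoint42"):
    print(block, "->", owner_of(block))
```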

  7. A lightweight communication library for distributed computing

    International Nuclear Information System (INIS)

    Groen, Derek; Rieder, Steven; Zwart, Simon Portegies; Grosso, Paola; Laat, Cees de

    2010-01-01

    We present MPWide, a platform-independent communication library for performing message passing between computers. Our library allows coupling of several local message passing interface (MPI) applications through a long-distance network and is specifically optimized for such communications. The implementation is deliberately kept lightweight and platform independent, and the library can be installed and used without administrative privileges. The only requirements are a C++ compiler and at least one open port to a wide-area network on each site. In this paper we present the library, describe the user interface, present performance tests and apply MPWide in a large-scale cosmological N-body simulation on a network of two computers, one in Amsterdam and the other in Tokyo.
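
    MPWide's own interface is not reproduced here; the sketch below only illustrates the underlying idea of coupling two sites through a single open TCP port with length-prefixed messages, using localhost and an arbitrary port as stand-ins for the Amsterdam and Tokyo endpoints.

```python
import socket
import threading
import time

def recv_exact(conn, n):
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def receiver(port):
    """'Site A': accept one connection and print the message it carries."""
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            size = int.from_bytes(recv_exact(conn, 8), "big")
            print("received:", recv_exact(conn, size).decode())

t = threading.Thread(target=receiver, args=(5201,))
t.start()
time.sleep(0.2)  # let the listener come up

# 'Site B': one length-prefixed message over the single open port.
with socket.create_connection(("localhost", 5201)) as s:
    payload = b"particle positions, step 42"
    s.sendall(len(payload).to_bytes(8, "big") + payload)
t.join()
```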

  8. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    Science.gov (United States)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, like flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocols declaration required for PanDA Pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  9. Computer control of shielded cell operations

    International Nuclear Information System (INIS)

    Jeffords, W.R. III.

    1987-01-01

    This paper describes in detail a computer system to remotely control shielded cell operations. System hardware, software, and design criteria are discussed. We have designed a computer-controlled buret that provides a tenfold improvement over the buret currently in service. A computer also automatically controls cell analyses, calibrations, and maintenance. This system improves conditions for the operators by providing a safer, more efficient working environment and is expandable for future growth and development

  10. On the relevance of efficient, integrated computer and network monitoring in HEP distributed online environment

    CERN Document Server

    Carvalho, D F; Delgado, V; Albert, J N; Bellas, N; Javello, J; Miere, Y; Ruffinoni, D; Smith, G

    1996-01-01

    Large Scientific Equipments are controlled by Computer Systems whose complexity is growing driven, on the one hand, by the volume and variety of the information, its distributed nature, the sophistication of its treatment and, on the other hand, by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems or Distributed Computer Control Systems (DCCS) for those systems dealing more with real time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC in view, it is proposed to integrate the various functions of DCCS monitoring into one general purpose Multi-layer ...

  11. On the Relevancy of Efficient, Integrated Computer and Network Monitoring in HEP Distributed Online Environment

    Science.gov (United States)

    Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.

    Large Scientific Equipments are controlled by Computer Systems whose complexity is growing driven, on the one hand, by the volume and variety of the information, its distributed nature, the sophistication of its treatment and, on the other hand, by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems or Distributed Computer Control Systems (DCCS) for those systems dealing more with real time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC in view, it is proposed to integrate the various functions of DCCS monitoring into one general purpose Multi-layer System.

  12. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, like flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocols declaration required for PanDA Pilot site movers, and others.

  13. Power Generation and Distribution via Distributed Coordination Control

    OpenAIRE

    Kim, Byeong-Yeon; Oh, Kwang-Kyo; Ahn, Hyo-Sung

    2014-01-01

    This paper presents power coordination, power generation, and power flow control schemes for supply-demand balance in distributed grid networks. Consensus schemes using only local information are employed to generate power coordination, power generation and power flow control signals. For the supply-demand balance, it is required to determine the amount of power needed at each distributed power node. Also due to the different power generation capacities of each power node, coordination of pow...

  14. Dedicated Programming Language for Small Distributed Control Devices

    DEFF Research Database (Denmark)

    Madsen, Per Printz; Borch, Ole

    2007-01-01

    ... can become a reality if each of these controlling computers can be configured to perform a cooperative task. This again requires the necessary communication facilities. In other words, this requires that all these simple and distributed computers can be programmed in a simple and hardware-independent way. This paper describes a new, flexible and simple language for programming distributed control tasks. The compiler for this language generates a target code that is very easy to interpret. An interpreter, that can be easily ported to different hardware, is described. The new language is simple and easy to learn.

  15. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high speed data transfer over the network and the widespread availability of software for design and pre-production in mechanical engineering have led to the fact that at the present time large industrial enterprises and small engineering companies implement complex computer systems for efficient solutions of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficient distribution (balancing) of the computational load and accommodation of input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node for transition of the user's request in accordance with a predetermined algorithm. Load balancing is one of the most used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing optimal scheduling in a distributed system, dynamically changing its infrastructure, is an important task.
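
    As a hedged illustration of the node-selection step described above, the toy model below dispatches each request to the currently least-loaded node; the paper's actual monitoring and scheduling algorithm may differ.

```python
import heapq
import random

class Cluster:
    """Toy balancer: route each request to the least-loaded node."""
    def __init__(self, n_nodes):
        # Min-heap of (current_load, node_id) pairs.
        self.heap = [(0.0, i) for i in range(n_nodes)]
        heapq.heapify(self.heap)

    def dispatch(self, cost):
        load, node = heapq.heappop(self.heap)  # monitoring: pick min load
        heapq.heappush(self.heap, (load + cost, node))
        return node

cluster = Cluster(4)
random.seed(1)
for req in range(10):
    cost = random.uniform(0.5, 2.0)  # compute cost of the user's request
    print(f"request {req} (cost {cost:.2f}) -> node {cluster.dispatch(cost)}")
```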

  16. A History of Computer Numerical Control.

    Science.gov (United States)

    Haggen, Gilbert L.

    Computer numerical control (CNC) has evolved from the first significant counting method--the abacus. Babbage had perhaps the greatest impact on the development of modern day computers with his analytical engine. Hollerith's functioning machine with punched cards was used in tabulating the 1890 U.S. Census. In order for computers to become a…

  17. 7th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Jung, Jason; Badica, Costin

    2014-01-01

    This book represents the combined peer-reviewed proceedings of the Seventh International Symposium on Intelligent Distributed Computing - IDC-2013, of the Second Workshop on Agents for Clouds - A4C-2013, of the Fifth International Workshop on Multi-Agent Systems Technology and Semantics - MASTS-2013, and of the International Workshop on Intelligent Robots - iR-2013. All the events were held in Prague, Czech Republic during September 4-6, 2013. The 41 contributions published in this book address many topics related to theory and applications of intelligent distributed computing and multi-agent systems, including: agent-based data processing, ambient intelligence, bio-informatics, collaborative systems, cryptography and security, distributed algorithms, grid and cloud computing, information extraction, intelligent robotics, knowledge management, linked data, mobile agents, ontologies, pervasive computing, self-organizing systems, peer-to-peer computing, social networks and trust, and swarm intelligence.  .

  18. A computer controlled tele-cobalt unit

    International Nuclear Information System (INIS)

    Brace, J.A.

    1982-01-01

    A computer controlled cobalt treatment unit was commissioned for treating patients in January 1980. Initially the controlling computer was a minicomputer, but now the control of the therapy unit is by a microcomputer. The treatment files, which specify the movement and configurations necessary to deliver the prescribed dose, are produced on the minicomputer and then transferred to the microcomputer using minitape cartridges. The actual treatment unit is based on a standard cobalt unit with a few additional features e.g. the drive motors can be controlled either by the computer or manually. Since the treatment unit is used for both manual and automatic treatments, the operational procedure under computer control is made to closely follow the manual procedure for a single field treatment. The necessary safety features which protect against human, hardware and software errors as well as the advantages and disadvantages of computer controlled radiotherapy are discussed

  19. Clock distribution system for digital computers

    International Nuclear Information System (INIS)

    Loomis, H.H.; Wyman, R.H.

    1981-01-01

    An apparatus is disclosed for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse ''overtaking'' a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component v'01(t); an array of N signal characteristic detector means, with detector means no. 1 receiving the timing means signal and producing a change-of-state signal v1(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal vn(t) and producing a modified change-of-state signal v'n(t) (n = 1, ..., N) having a fundamental frequency component that is substantially proportional to v'01(t - θn(t)), with a cumulative phase shift θn(t) having a time derivative that may be made uniformly and arbitrarily small; and with detector means n+1 (1 ≤ n < N) receiving the modified change-of-state signal v'n(t) from filter means no. n and, in response to receipt of such a signal above a predetermined threshold, producing a change-of-state signal vn+1(t)

  20. Logical design for computers and control

    CERN Document Server

    Dodd, Kenneth N

    1972-01-01

    Logical Design for Computers and Control gives an introduction to the concepts and principles, applications, and advancements in the field of control logic. The text covers topics such as logic elements; high and low logic; kinds of flip-flops; binary counting and arithmetic; and Boolean algebra, Boolean laws, and De Morgan's theorem. Also covered are topics such as electrostatics and atomic theory; the integrated circuit and simple control systems; the conversion of analog to digital systems; and computer applications and control. The book is recommended

  1. Actors: A Model of Concurrent Computation in Distributed Systems.

    Science.gov (United States)

    1985-06-01

    Report AD-A157 917: Actors: A Model of Concurrent Computation in Distributed Systems. Gul A. Agha, Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, Cambridge, Massachusetts. Support for the laboratory's artificial intelligence research is ... This document has been approved for public release and sale; its ...
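
    A minimal sketch of the actor abstraction the report formalizes (each actor owns a mailbox and processes messages one at a time, and sends are asynchronous); this illustrates the flavor of the model, not Agha's formal semantics.

```python
import queue
import threading
import time

class Actor:
    """Each actor owns a mailbox and processes messages sequentially."""
    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self.behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        """Asynchronous, non-blocking message send."""
        self.mailbox.put(msg)

    def _run(self):
        while True:
            self.behavior(self, self.mailbox.get())

def counter_behavior(actor, msg):
    # Actors encapsulate local state; here, a running total per message.
    actor.total = getattr(actor, "total", 0) + msg
    print("total is now", actor.total)

counter = Actor(counter_behavior)
for n in (1, 2, 3):
    counter.send(n)

time.sleep(0.2)  # let the daemon thread drain the mailbox
```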

  2. CMS on the GRID: Toward a fully distributed computing architecture

    International Nuclear Information System (INIS)

    Innocente, Vincenzo

    2003-01-01

    The computing systems required to collect, analyse and store the physics data at LHC would need to be distributed and global in scope. CMS is actively involved in several grid-related projects to develop and deploy a fully distributed computing architecture. We present here recent developments of tools for automating job submission and for serving data to remote analysis stations. Plans for further test and deployment of a production grid are also described

  3. Tactical Airborne Distributed Computing and Networks

    Science.gov (United States)

    1981-10-01

    ... a function can result in the failure of that function and cause the mission to be abandoned. For a safety critical function there is an additional ... Bus Controller; AP-101 interface. (Figure 5: Bus Controller - Network Interface)

  4. 9th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Camacho, David; Analide, Cesar; Seghrouchni, Amal; Badica, Costin

    2016-01-01

    This book represents the combined peer-reviewed proceedings of the ninth International Symposium on Intelligent Distributed Computing – IDC’2015, of the Workshop on Cyber Security and Resilience of Large-Scale Systems – WSRL’2015, and of the International Workshop on Future Internet and Smart Networks – FI&SN’2015. All the events were held in Guimarães, Portugal during October 7th-9th, 2015. The 46 contributions published in this book address many topics related to theory and applications of intelligent distributed computing, including: Intelligent Distributed Agent-Based Systems, Ambient Intelligence and Social Networks, Computational Sustainability, Intelligent Distributed Knowledge Representation and Processing, Smart Networks, Networked Intelligence and Intelligent Distributed Applications, amongst others.

  5. Distribution control centers in the Croatian power system with particular consideration of the Zagreb distribution control center

    International Nuclear Information System (INIS)

    Cupin, N.

    2000-01-01

    Discussion about the control of the Croatian power system in view of the forthcoming free electricity market has so far not included the distribution level. With this article we would like to clarify the role of distribution control centers, pointing out the importance of the Zagreb distribution control center, which controls one third of Croatian (HEP) consumption. (author)

  6. Evaluation of Secure Computation in a Distributed Healthcare Setting.

    Science.gov (United States)

    Kimura, Eizen; Hamada, Koki; Kikuchi, Ryo; Chida, Koji; Okamoto, Kazuya; Manabe, Shirou; Kuroda, Tomohiko; Matsumura, Yasushi; Takeda, Toshihiro; Mihara, Naoki

    2016-01-01

    Issues related to ensuring patient privacy and data ownership in clinical repositories prevent the growth of translational research. Previous studies have used an aggregator agent to obscure clinical repositories from the data user, and to ensure the privacy of output using statistical disclosure control. However, there remain several issues that must be considered. One such issue is that a data breach may occur when multiple nodes conspire. Another is that the agent may eavesdrop on or leak a user's queries and their results. We have implemented a secure computing method so that the data used by each party can be kept confidential even if all of the other parties conspire to crack the data. We deployed our implementation at three geographically distributed nodes connected to a high-speed layer two network. The performance of our method, with respect to processing times, suggests suitability for practical use.
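
    The paper's protocol is not reproduced here; as a sketch of the general idea, additive secret sharing lets three nodes jointly sum per-site counts while no single node, nor any proper subset of conspiring nodes, ever sees another site's raw value. Site names and counts are invented.

```python
import secrets

P = 2**61 - 1  # all arithmetic is modulo a large prime

def share(value, n=3):
    """Split value into n additive shares summing to value mod P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

# Each site splits its private count among the three compute nodes.
counts = {"site_a": 120, "site_b": 87, "site_c": 45}
node_totals = [0, 0, 0]
for value in counts.values():
    for node, part in enumerate(share(value)):
        node_totals[node] = (node_totals[node] + part) % P

# Nodes publish only their share totals; combining reveals just the sum.
print(sum(node_totals) % P)  # 252, with no raw count ever disclosed
```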

  7. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  8. Distributed and recoverable digital control system

    Science.gov (United States)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A real-time multi-tasking digital control system with rapid recovery capability is disclosed. The control system includes a plurality of computing units comprising a plurality of redundant processing units, with each of the processing units configured to generate one or more redundant control commands. One or more internal monitors are employed for detecting data errors in the control commands. One or more recovery triggers are provided for initiating rapid recovery of a processing unit if data errors are detected. The control system also includes a plurality of actuator control units each in operative communication with the computing units. The actuator control units are configured to initiate a rapid recovery if data errors are detected in one or more of the processing units. A plurality of smart actuators communicates with the actuator control units, and a plurality of redundant sensors communicates with the computing units.

  9. Developing a Distributed Computing Architecture at Arizona State University.

    Science.gov (United States)

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  10. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments to utilize the computing power of the millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system is utilized for managing data connections and the work queue for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
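
    The server-side queue described (a relational database managing work units for the volunteer nodes) can be sketched in a few lines; this is an illustrative stand-in, not the authors' schema, and the table and column names are invented:

      import sqlite3

      # A toy work-unit queue: a relational table tracks simulation chunks
      # and their assignment state, as the abstract describes.
      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE tasks (
          id INTEGER PRIMARY KEY, params TEXT,
          state TEXT DEFAULT 'pending', worker TEXT)""")
      db.executemany("INSERT INTO tasks (params) VALUES (?)",
                     [(f"subbasin={i}",) for i in range(6)])

      def checkout(worker_id):
          """Atomically hand the next pending chunk to a volunteer node."""
          with db:
              row = db.execute(
                  "SELECT id, params FROM tasks WHERE state='pending' LIMIT 1"
              ).fetchone()
              if row is None:
                  return None
              db.execute("UPDATE tasks SET state='running', worker=? WHERE id=?",
                         (worker_id, row[0]))
          return row

      def complete(task_id):
          with db:
              db.execute("UPDATE tasks SET state='done' WHERE id=?", (task_id,))

      task = checkout("browser-client-42")
      print(task)          # (1, 'subbasin=0')
      complete(task[0])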

  11. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    Energy Technology Data Exchange (ETDEWEB)

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software

  12. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput

  13. Control of distributed systems : tutorial and overview

    Czech Academy of Sciences Publication Activity Database

    van Schuppen, J. H.; Boutin, O.; Kempker, P.L.; Komenda, Jan; Masopust, Tomáš; Pambakian, N.; Ran, A.C.M.

    2011-01-01

    Vol. 17, No. 5-6 (2011), pp. 579-602. ISSN 0947-3580. R&D Projects: GA ČR(CZ) GAP103/11/0517; GA ČR(CZ) GPP202/11/P028. Institutional research plan: CEZ:AV0Z10190503. Keywords: distributed system * coordination control * hierarchical control * distributed control * distributed control with communication. Subject RIV: BA - General Mathematics. Impact factor: 0.817, year: 2011. http://ejc.revuesonline.com/article.jsp?articleId=16873

  14. Experiment Dashboard for Monitoring of the LHC Distributed Computing Systems

    International Nuclear Information System (INIS)

    Andreeva, J; Campos, M Devesas; Cros, J Tarragon; Gaidioz, B; Karavakis, E; Kokoszkiewicz, L; Lanciotti, E; Maier, G; Ollivier, W; Nowotka, M; Rocha, R; Sadykov, T; Saiz, P; Sargsyan, L; Sidorova, I; Tuckett, D

    2011-01-01

    LHC experiments are currently taking collision data. A distributed computing model chosen by the four main LHC experiments allows physicists to benefit from resources spread all over the world. The distributed model and the scale of LHC computing activities increase the level of complexity of middleware, and also the chances of possible failures or inefficiencies in involved components. In order to ensure the required performance and functionality of the LHC computing system, monitoring the status of the distributed sites and services as well as monitoring LHC computing activities are among the key factors. Over the last years, the Experiment Dashboard team has been working on a number of applications that facilitate the monitoring of different activities, including following up jobs and transfers, as well as site and service availabilities. This presentation describes Experiment Dashboard applications used by the LHC experiments and experience gained during the first months of data taking.

  15. Intelligent Control and Operation of Distribution System

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad

    …methodology to ensure efficient control and operation of the future distribution networks. The major scientific challenge is thus to develop control models and strategies to coordinate responses from widely distributed controllable loads and local generation. Detailed models of key Smart Grid (SG) elements… in this direction but also benefit distribution system operators in the planning and development of the distribution network. The major contributions of this work are described in the following four stages: In the first stage, an intelligent Demand Response (DR) control architecture is developed for coordinating… the key SG actors, namely consumers, network operators, aggregators, and electricity market entities. A key intent of the architecture is to facilitate market participation of residential consumers and prosumers. A Hierarchical Control Architecture (HCA) having primary, secondary, and tertiary control…

  16. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting of Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion …

  17. Computer control for remote wind turbine operation

    Energy Technology Data Exchange (ETDEWEB)

    Manwell, J.F.; Rogers, A.L.; Abdulwahid, U.; Driscoll, J. [Univ. of Massachusetts, Amherst, MA (United States)

    1997-12-31

    Lightweight wind turbines located in harsh, remote sites require particularly capable controllers. Based on extensive operation of the original ESI-807 moved to such a location, a much more sophisticated controller than the original one has been developed. This paper describes the design, development and testing of that new controller. The complete control and monitoring system consists of sensor and control inputs, the control computer, control outputs, and additional equipment. The control code was written in Microsoft Visual Basic on a PC-type computer. The control code monitors potential faults and allows the turbine to operate in one of eight states: off, start, run, freewheel, low wind shutdown, normal wind shutdown, emergency shutdown, and blade parking. The controller also incorporates two "virtual wind turbines," including a dynamic model of the machine, for code testing. The controller can handle numerous situations for which the original controller was unequipped.
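
    As an illustration of the eight-state logic described above (not the authors' Visual Basic code), a toy transition function might look as follows; the cut-in/cut-out thresholds and the reduced set of transitions are invented:

      from enum import Enum, auto

      class TurbineState(Enum):
          OFF = auto(); START = auto(); RUN = auto(); FREEWHEEL = auto()
          LOW_WIND_SHUTDOWN = auto(); NORMAL_WIND_SHUTDOWN = auto()
          EMERGENCY_SHUTDOWN = auto(); BLADE_PARKING = auto()

      # Illustrative set points (m/s); the paper's actual values are not given.
      CUT_IN, CUT_OUT = 4.0, 25.0

      def next_state(state, wind_speed, fault):
          """One pass of the control loop's state-transition logic
          (only a subset of the eight states is wired up here)."""
          if fault:
              return TurbineState.EMERGENCY_SHUTDOWN
          if state in (TurbineState.OFF, TurbineState.LOW_WIND_SHUTDOWN):
              return TurbineState.START if wind_speed >= CUT_IN else state
          if state in (TurbineState.START, TurbineState.RUN):
              if wind_speed > CUT_OUT:
                  return TurbineState.NORMAL_WIND_SHUTDOWN
              if wind_speed < CUT_IN:
                  return TurbineState.LOW_WIND_SHUTDOWN
              return TurbineState.RUN
          return state

      print(next_state(TurbineState.OFF, 6.2, fault=False))  # TurbineState.START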

  18. 9th International conference on distributed computing and artificial intelligence

    CERN Document Server

    Santana, Juan; González, Sara; Molina, Jose; Bernardos, Ana; Rodríguez, Juan; DCAI 2012; International Symposium on Distributed Computing and Artificial Intelligence 2012

    2012-01-01

    The International Symposium on Distributed Computing and Artificial Intelligence 2012 (DCAI 2012) is a stimulating and productive forum where the scientific community can work towards future cooperation in Distributed Computing and Artificial Intelligence areas. This conference is a forum in which  applications of innovative techniques for solving complex problems will be presented. Artificial intelligence is changing our society. Its application in distributed environments, such as the internet, electronic commerce, environment monitoring, mobile communications, wireless devices, distributed computing, to mention only a few, is continuously increasing, becoming an element of high added value with social and economic potential, in industry, quality of life, and research. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both the academic and indus...

  19. Distributed computing and artificial intelligence : 10th International Conference

    CERN Document Server

    Neves, José; Rodriguez, Juan; Santana, Juan; Gonzalez, Sara

    2013-01-01

    The International Symposium on Distributed Computing and Artificial Intelligence 2013 (DCAI 2013) is a forum in which applications of innovative techniques for solving complex problems are presented. Artificial intelligence is changing our society. Its application in distributed environments, such as the internet, electronic commerce, environment monitoring, mobile communications, wireless devices, distributed computing, to mention only a few, is continuously increasing, becoming an element of high added value with social and economic potential, in industry, quality of life, and research. This conference is a stimulating and productive forum where the scientific community can work towards future cooperation in Distributed Computing and Artificial Intelligence areas. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both the academic and industry se...

  20. Holonic Approach for Control and Coordination of Distributed Sensors

    Science.gov (United States)

    2008-08-01

    [Abstract not recoverable: the source scan mixes fragments of the report's footnotes and references. Legible fragments mention holons interacting natively with a virtual world, agent platforms (Java Agent DEvelopment Framework (JADE), FIPA-OS, ZEUS, Java Agent Services API (JAS)), and a cited reference: "High-Level Communication and Control in a Distributed Problem Solver", IEEE Transactions on Computers, C-29(12), 1104-1113.]

  1. Disk access controller for Multi 8 computer

    International Nuclear Information System (INIS)

    Segalard, Jean

    1970-01-01

    After presenting the initial characteristics and weaknesses of the software provided for the control of a disk memory coupled to a Multi 8 computer, the author reports the development and improvement of this controller software. He presents the different constitutive parts of the computer and the operation of the disk coupling and of the direct memory access. He then reports the development of the disk access controller: software organisation, loader, subprograms and statements.

  2. Cost effective distributed computing for Monte Carlo radiation dosimetry

    International Nuclear Information System (INIS)

    Wise, K.N.; Webb, D.V.

    2000-01-01

    Full text: An inexpensive computing facility has been established for performing repetitive Monte Carlo simulations with the BEAM and EGS4/EGSnrc codes of linear accelerator beams, for calculating effective dose from diagnostic imaging procedures and of ion chambers and phantoms used for the Australian high energy absorbed dose standards. The facility currently consists of 3 dual-processor 450 MHz PCs linked by a high speed LAN. The 3 PCs can be accessed either locally from a single keyboard/monitor/mouse combination using a SwitchView controller or remotely via a computer network from PCs with suitable communications software (e.g. Telnet, Kermit etc). All 3 PCs are identically configured to have the Red Hat Linux 6.0 operating system. A Fortran compiler and the BEAM and EGS4/EGSnrc codes are available on the 3 PCs. The preparation of sequences of jobs utilising the Monte Carlo codes is simplified using load-distributing software (enFuzion 6.0 marketed by TurboLinux Inc, formerly Cluster from Active Tools) which efficiently distributes the computing load amongst all 6 processors. We describe 3 applications of the system - (a) energy spectra from radiotherapy sources, (b) mean mass-energy absorption coefficients and stopping powers for absolute absorbed dose standards and (c) dosimetry for diagnostic procedures; (a) and (b) are based on the transport codes BEAM and FLURZnrc while (c) is a Fortran/EGS code developed at ARPANSA. Efficiency gains ranged from 3 for (c) to close to the theoretical maximum of 6 for (a) and (b), with the gain depending on the amount of 'bookkeeping' to begin each task and the time taken to complete a single task. We have found the use of a load-balancing batch processing system with many PCs to be an economical way of achieving greater productivity for Monte Carlo calculations or for any computer-intensive task requiring many runs with different parameters. Copyright (2000) Australasian College of Physical Scientists and
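
    The farming pattern behind those efficiency gains is easy to reproduce on a modern multi-core machine; a toy stand-in (estimating pi instead of running BEAM/EGS4 jobs, which cannot be reproduced here) shows where near-linear speedup comes from when each task is long relative to its start-up bookkeeping:

      import multiprocessing as mp
      import random

      def mc_task(args):
          """One self-contained Monte Carlo run (a stand-in for a BEAM/EGS
          job): estimate pi from n dart throws with a private seeded RNG."""
          seed, n = args
          rng = random.Random(seed)
          hits = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))
          return 4.0 * hits / n

      if __name__ == "__main__":
          # Farm independent runs out to all processors, as enFuzion did
          # across the PC cluster; speedup approaches the processor count
          # when each task dwarfs its per-task "bookkeeping".
          jobs = [(seed, 200_000) for seed in range(12)]
          with mp.Pool() as pool:
              estimates = pool.map(mc_task, jobs)
          print(sum(estimates) / len(estimates))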

  3. Soft computing in intelligent control

    CERN Document Server

    Jung, Jin-Woo; Kubota, Naoyuki

    2014-01-01

    Nowadays, people tend to be fond of smarter machines that are able to collect data, learn, recognize things, infer meanings, communicate with humans and perform behaviors. Thus, we have built advanced intelligent control affecting all areas of society: automotive, rail, aerospace, defense, energy, healthcare, telecoms and consumer electronics, finance, urbanization. Consequently, users and consumers can gain new experiences through intelligent control systems. We can reshape the technology world and provide new opportunities for industry and business by offering cost-effective, sustainable and innovative business models. We will have to know how to create our own digital life. Intelligent control systems enable people to build complex applications, to implement system integration and to meet society's demand for safety and security. This book aims at presenting research results and solutions for applications relevant to intelligent control systems. We propose to researchers ...

  4. Intelligent distributed control for nuclear power plants

    International Nuclear Information System (INIS)

    Klevans, E.H.

    1991-01-01

    In September of 1989 work began on the DOE University Program grant DE-FG07-89ER12889. The grant provides support for a three year project to develop and demonstrate Intelligent Distributed Control (IDC) for Nuclear Power Plants. The body of this First Annual Technical Progress report summarizes the first year tasks while the appendices provide detailed information presented at conference meetings. One major addendum report, authored by M.A. Schultz, describes the ultimate goals and projected structure of an automatic distributed control system for EBR-2. The remaining tasks of the project develop specific implementations of various components required to demonstrate the intelligent distributed control concept

  5. Recent Technology Advances in Distributed Engine Control

    Science.gov (United States)

    Culley, Dennis

    2017-01-01

    This presentation provides an overview of the work performed at NASA Glenn Research Center in distributed engine control technology. This is control system hardware technology that overcomes engine system constraints by modularizing control hardware and integrating the components over communication networks.

  6. Monte Carlo in radiotherapy: experience in a distributed computational environment

    Science.gov (United States)

    Caccia, B.; Mattia, M.; Amati, G.; Andenna, C.; Benassi, M.; D'Angelo, A.; Frustagli, G.; Iaccarino, G.; Occhigrossi, A.; Valentini, S.

    2007-06-01

    New technologies in cancer radiotherapy need a more accurate computation of the dose delivered in the radiotherapeutical treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results about using Monte Carlo (MC) codes in dose calculation for treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS), along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research), has been used to perform a fully complete MC simulation to compute dose distribution on phantoms irradiated with a radiotherapy accelerator. Using the BEAMnrc and GEANT4 MC-based codes we calculated dose distributions on a plain water phantom and an air/water phantom. Experimental and calculated dose values agreed to within ±2% (for depths between 5 mm and 130 mm), both in PDD (percentage depth dose) and in transversal sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.

  7. Centralized computer-based controls of the Nova Laser Facility

    International Nuclear Information System (INIS)

    Krammen, J.

    1985-01-01

    This article introduces the overall architecture of the computer-based Nova Laser Control System and describes its basic components. Use of standard hardware and software components ensures that the system, while specialized and distributed throughout the facility, is adaptable. 9 references, 6 figures

  8. The CANDU 9 distributed control system design process

    International Nuclear Information System (INIS)

    Harber, J.E.; Kattan, M.K.; Macbeth, M.J.

    1997-01-01

    Canadian designed CANDU pressurized heavy water nuclear reactors have been world leaders in electrical power generation. The CANDU 9 project is AECL's next reactor design. Plant control for the CANDU 9 station design is performed by a distributed control system (DCS) as compared to centralized control computers, analog control devices and relay logic used in previous CANDU designs. The selection of a DCS as the platform to perform the process control functions and most of the data acquisition of the plant, is consistent with the evolutionary nature of the CANDU technology. The control strategies for the DCS control programs are based on previous CANDU designs but are implemented on a new hardware platform taking advantage of advances in computer technology. This paper describes the design process for developing the CANDU 9 DCS. Various design activities, prototyping and analyses have been undertaken in order to ensure a safe, functional, and cost-effective design. (author)

  9. Computationally efficient methods for digital control

    NARCIS (Netherlands)

    Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.

    2008-01-01

    The problem of designing a digital controller is considered with the novelty of explicitly taking into account the computation cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these

  10. Distributed control system for CANDU 9 nuclear power plant

    International Nuclear Information System (INIS)

    Harber, J.E.; Kattan, M.K.; Macbeth, M.J.

    1996-01-01

    Canadian designed CANDU pressurized heavy water nuclear reactors have been world leaders in electrical power generation. The CANDU 9 project is AECL's next reactor design. The CANDU 9 plant monitoring, annunciation, and control functions are implemented in two evolutionary systems: the distributed control system (DCS) and the plant display system (PDS). The DCS implements most of the plant control functions on a single hardware platform. The DCS communicates with the PDS to provide the main operator interface and annunciation capabilities of the previous control computer designs along with human interface enhancements required in a modern control system. (author)

  11. A computable type theory for control systems

    NARCIS (Netherlands)

    P.J. Collins (Pieter); L. Guo; J. Baillieul

    2009-01-01

    In this paper, we develop a theory of computable types suitable for the study of control systems. The theory uses type-two effectivity as the underlying computational model, but we quickly develop a type system which can be manipulated abstractly, but for which all allowable operations

  12. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    Science.gov (United States)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. Data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies of data storing, computing and analyzing. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed based on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark in conjunction with the scientific computing environment, exploratory spatial data analysis tools, temporal data management and analysis systems make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that deal with other domains related to spatial properties. We

  13. Game-Theoretic Learning in Distributed Control

    KAUST Repository

    Marden, Jason R.; Shamma, Jeff S.

    2018-01-01

    In distributed architecture control problems, there is a collection of interconnected decision-making components that seek to realize desirable collective behaviors through local interactions and by processing local information. Applications range from autonomous vehicles to energy to transportation. One approach to control of such distributed architectures is to view the components as players in a game. In this approach, two design considerations are the components' incentives and the rules that dictate how components react to the decisions of other components.

  14. Controlling Laboratory Processes From A Personal Computer

    Science.gov (United States)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by the other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  15. Cloud manufacturing distributed computing technologies for global and sustainable manufacturing

    CERN Document Server

    Mehnen, Jörn

    2013-01-01

    Global networks, which are the primary pillars of the modern manufacturing industry and supply chains, can only cope with the new challenges, requirements and demands when supported by new computing and Internet-based technologies. Cloud Manufacturing: Distributed Computing Technologies for Global and Sustainable Manufacturing introduces a new paradigm for scalable, service-oriented, sustainable and globally distributed manufacturing systems. The eleven chapters in this book provide an updated overview of the latest technological development and applications in relevant research areas. Following an introduction to the essential features of Cloud Computing, chapters cover a range of methods and applications such as the factors that actually affect adoption of the Cloud Computing technology in manufacturing companies and a new geometrical simplification method to stream 3-Dimensional design and manufacturing data via the Internet. This is further supported by case studies and real-life data for Waste Electrical ...

  16. High threshold distributed quantum computing with three-qubit nodes

    International Nuclear Information System (INIS)

    Li Ying; Benjamin, Simon C

    2012-01-01

    In the distributed quantum computing paradigm, well-controlled few-qubit ‘nodes’ are networked together by connections which are relatively noisy and failure prone. A practical scheme must offer high tolerance to errors while requiring only simple (i.e. few-qubit) nodes. Here we show that relatively modest, three-qubit nodes can support advanced purification techniques and so offer robust scalability: the infidelity in the entanglement channel may be permitted to approach 10% if the infidelity in local operations is of order 0.1%. Our tolerance of network noise is therefore an order of magnitude beyond prior schemes, and our architecture remains robust even in the presence of considerable decoherence rates (memory errors). We compare the performance with that of schemes involving nodes of lower and higher complexity. Ion traps, and NV-centres in diamond, are two highly relevant emerging technologies: they possess the requisite properties of good local control, rapid and reliable readout, and methods for entanglement-at-a-distance. (paper)

  17. Distributed computing for real-time petroleum reservoir monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ayodele, O. R. [University of Alberta, Edmonton, AB (Canada)

    2004-05-01

    Computer software architecture is presented to illustrate how the concept of distributed computing can be applied to real-time reservoir monitoring processes, permitting the continuous monitoring of the dynamic behaviour of petroleum reservoirs at much shorter intervals. The paper describes the fundamental technologies driving distributed computing, namely Java 2 Platform Enterprise edition (J2EE) by Sun Microsystems, and the Microsoft Dot-Net (Microsoft.Net) initiative, and explains the challenges involved in distributed computing. These are: (1) availability of permanently placed downhole equipment to acquire and transmit seismic data; (2) availability of high bandwidth to transmit the data; (3) security considerations; (4) adaptation of existing legacy codes to run on networks as downloads on demand; and (5) credibility issues concerning data security over the Internet. Other applications of distributed computing in the petroleum industry are also considered, specifically MWD, LWD and SWD (measurement-while-drilling, logging-while-drilling, and simulation-while-drilling), and drill-string vibration monitoring. 23 refs., 1 fig.

  18. A computationally efficient fuzzy control s

    Directory of Open Access Journals (Sweden)

    Abdel Badie Sharkawy

    2013-12-01

    Full Text Available This paper develops a decentralized fuzzy control scheme for MIMO nonlinear second order systems, with application to robot manipulators, via a combination of genetic algorithms (GAs) and fuzzy systems. The controller for each degree of freedom (DOF) consists of a feedforward fuzzy torque-computing system and a feedback fuzzy PD system. The feedforward fuzzy system is trained and optimized off-line using GAs, whereby not only the parameters but also the structure of the fuzzy system are optimized. The feedback fuzzy PD system, on the other hand, is used to keep the closed loop stable. The rule base consists of only four rules per DOF. Furthermore, the fuzzy feedback system is decentralized and simplified, leading to a computationally efficient control scheme. The proposed control scheme has the following advantages: (1) it needs no exact dynamics of the system and the computation is time-saving because of the simple structure of the fuzzy systems, and (2) the controller is robust against various parameter and payload uncertainties. The computational complexity of the proposed control scheme has been analyzed and compared with previous works. Computer simulations show that this controller is effective in achieving the control goals.
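
    A four-rule fuzzy PD feedback law of the kind described can be sketched for a single DOF as follows; the membership shapes, scales, and output singletons are invented, and the paper's GA-tuned feedforward part is not reproduced:

      import math

      def mu_pos(x, scale):
          """Membership in 'Positive' via a smooth sigmoid; 'Negative' is 1 - this."""
          return 1.0 / (1.0 + math.exp(-x / scale))

      def fuzzy_pd(e, de, e_scale=1.0, de_scale=1.0, u_max=10.0):
          """Four-rule fuzzy PD law: one rule per sign combination of the
          error e and error rate de, product t-norm for rule firing, and a
          weighted average of singleton outputs for defuzzification."""
          pe, pde = mu_pos(e, e_scale), mu_pos(de, de_scale)
          ne, nde = 1.0 - pe, 1.0 - pde
          # (e Pos, de Pos) -> +u_max, (e Pos, de Neg) -> +u_max/2, etc.
          rules = [(pe * pde, u_max), (pe * nde, 0.5 * u_max),
                   (ne * pde, -0.5 * u_max), (ne * nde, -u_max)]
          w = sum(weight for weight, _ in rules)
          return sum(weight * u for weight, u in rules) / w

      print(fuzzy_pd(e=0.8, de=-0.1))   # positive torque for positive error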

  19. First Experiences with LHC Grid Computing and Distributed Analysis

    CERN Document Server

    Fisk, Ian

    2010-01-01

    In this presentation the experiences of the LHC experiments using grid computing were presented with a focus on experience with distributed analysis. After many years of development, preparation, exercises, and validation the LHC (Large Hadron Collider) experiments are in operations. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end the expected evolution and future plans are outlined.

  20. Distributed Computing and Artificial Intelligence, 12th International Conference

    CERN Document Server

    Malluhi, Qutaibah; Gonzalez, Sara; Bocewicz, Grzegorz; Bucciarelli, Edgardo; Giulioni, Gianfranco; Iqba, Farkhund

    2015-01-01

    The 12th International Symposium on Distributed Computing and Artificial Intelligence 2015 (DCAI 2015) is a forum to present applications of innovative techniques for studying and solving complex problems. The exchange of ideas between scientists and technicians from both the academic and industrial sector is essential to facilitate the development of systems that can meet the ever-increasing demands of today’s society. The present edition brings together past experience, current work and promising future trends associated with distributed computing, artificial intelligence and their application in order to provide efficient solutions to real problems. This symposium is organized by the Osaka Institute of Technology, Qatar University and the University of Salamanca.

  1. A Distributed Computing Network for Real-Time Systems.

    Science.gov (United States)

    1980-11-03

    [Abstract not recoverable: the source scan is garbled. Legible fragments identify the report as Naval Underwater Systems Center, Newport, RI, Technical Document TD 5932, "A Distributed Computing Network for Real-Time Systems", by Gordon E. Morson, November 1980.]

  2. Distributed MRI reconstruction using Gadgetron-based cloud computing.

    Science.gov (United States)

    Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S

    2015-03-01

    To expand the open source Gadgetron reconstruction framework to support distributed computing and to demonstrate that a multinode version of the Gadgetron can be used to provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed sensing reconstruction on a clinical scanner with low reconstruction latency (eg, cardiac and neuroimaging applications). The proposed setup was able to handle acquisition and ℓ1-SPIRiT reconstruction of nine high temporal resolution real-time, cardiac short axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm³ isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed computing enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.

  3. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combining the input distributions is defined by the user by introducing the appropriate FORTRAN statements into a designated subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined.
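
    In modern terms the scheme fits in a few lines; a sketch with the user-supplied FORTRAN combining function replaced by a Python callable (the input distributions, sample count, and 90% limits below are illustrative choices):

      import random
      import statistics

      def combine(dists, func, n_samples=100_000, seed=1):
          """Monte Carlo combination of distributions as the abstract
          describes: draw one sample from each input distribution, apply
          the user-supplied combining function, and repeat."""
          rng = random.Random(seed)
          samples = [func(*(d(rng) for d in dists)) for _ in range(n_samples)]
          samples.sort()
          mean = statistics.fmean(samples)
          stdev = statistics.stdev(samples)
          # 5th/95th percentiles as the resultant distribution's limits
          lo = samples[int(0.05 * n_samples)]
          hi = samples[int(0.95 * n_samples)]
          return mean, stdev, (lo, hi)

      # Example: product of a lognormal failure rate and a normal demand factor.
      dists = [lambda r: r.lognormvariate(0.0, 0.5),
               lambda r: r.gauss(2.0, 0.3)]
      print(combine(dists, lambda x, y: x * y))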

  4. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Full Text Available Many artificial intelligence applications often require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm to run interconnected multiple nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications, in which a large number of computing nodes are running. We reveal that our distributed snapshot protocol guarantees the correctness, safety, and liveness conditions.
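
    The paper's own protocol is not reproduced here, but the classic marker-based (Chandy-Lamport style) construction that consistent global snapshots build on can be simulated compactly; the two-node network and message values are invented:

      from collections import defaultdict, deque

      class Node:
          def __init__(self, name, state, peers):
              self.name, self.state, self.peers = name, state, peers
              self.recorded = None              # local state at snapshot time
              self.open_channels = set()        # incoming channels still recording
              self.in_flight = defaultdict(list)

          def record(self, net):
              """Capture local state, then flood markers on outgoing channels."""
              self.recorded = self.state
              self.open_channels = set(self.peers)
              for p in self.peers:
                  net[(self.name, p)].append(("MARKER", None))

          def deliver(self, sender, kind, payload, net):
              if kind == "MARKER":
                  if self.recorded is None:     # first marker: snapshot now
                      self.record(net)
                  self.open_channels.discard(sender)
              else:
                  self.state += payload         # ordinary application message
                  if self.recorded is not None and sender in self.open_channels:
                      self.in_flight[sender].append(payload)  # caught in flight

      net = defaultdict(deque)
      a, b = Node("A", 10, ["B"]), Node("B", 5, ["A"])
      nodes = {"A": a, "B": b}
      net[("B", "A")].append(("APP", 3))        # a transfer already on the wire
      a.record(net)                             # node A initiates the snapshot
      while any(net.values()):                  # deliver until channels drain
          for (src, dst), q in list(net.items()):
              if q:
                  kind, payload = q.popleft()
                  nodes[dst].deliver(src, kind, payload, net)
      print(a.recorded, b.recorded, dict(a.in_flight))   # 10 5 {'B': [3]}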

  5. Secure Computation, I/O-Efficient Algorithms and Distributed Signatures

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Kölker, Jonas; Toft, Tomas

    2012-01-01

    …values of the form (r, g^r) for random secret-shared r ∈ ℤq and g^r in a group of order q. This costs a constant number of exponentiations per player per value generated, even if less than n/3 players are malicious. This can be used for efficient distributed computing of Schnorr signatures. We further develop… the technique so we can sign secret data in a distributed fashion at essentially the same cost…
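
    The primitive described (jointly producing r and g^r with r secret-shared) can be sketched with additive shares: each party publishes g^(r_i), and the product of the public values yields g^r while r = Σ r_i is never assembled. The tiny group below is a toy, and the paper's protections against malicious players are omitted:

      import secrets

      # Toy Schnorr group: q = 103 divides p - 1 = 2266, and g = 2**22 mod p
      # has order q. Real deployments use large standardized groups.
      P, Q = 2267, 103
      G = pow(2, (P - 1) // Q, P)     # = 354

      def jointly_sample(n_parties=3):
          """Each party keeps its share r_i secret and publishes g^(r_i) mod p."""
          shares = [secrets.randbelow(Q) for _ in range(n_parties)]
          commitments = [pow(G, r_i, P) for r_i in shares]
          g_r = 1
          for c in commitments:       # anyone can combine the public values
              g_r = g_r * c % P
          return shares, g_r          # r = sum(shares) % Q stays distributed

      shares, g_r = jointly_sample()
      # Demo-only check (assembling r here defeats the purpose in real use):
      assert g_r == pow(G, sum(shares) % Q, P)
      print(g_r)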

  6. Computing exact bundle compliance control charts via probability generating functions.

    Science.gov (United States)

    Chen, Binchao; Matis, Timothy; Benneyan, James

    2016-06-01

    Compliance to evidenced-base practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as for risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
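
    The generating-function idea is compact enough to show directly: for independent practices with compliance probabilities p_i, expanding G(z) = Π(1 - p_i + p_i·z) by polynomial multiplication yields the exact (Poisson-binomial) distribution of the compliance count. A sketch with invented probabilities:

      def bundle_pmf(p):
          """Exact distribution of the number of compliant bundle elements:
          coefficient k of the expanded PGF is P(exactly k practices met)."""
          pmf = [1.0]                       # G(z) = 1 before any element
          for pi in p:
              nxt = [0.0] * (len(pmf) + 1)
              for k, c in enumerate(pmf):
                  nxt[k] += c * (1.0 - pi)  # element i non-compliant
                  nxt[k + 1] += c * pi      # element i compliant
              pmf = nxt
          return pmf

      # Four evidence-based practices with unequal compliance probabilities
      # (illustrative numbers); all-or-none compliance is the last coefficient.
      pmf = bundle_pmf([0.95, 0.90, 0.85, 0.80])
      print(pmf[-1])                        # P(all four met) ≈ 0.5814
      print(sum(pmf))                       # sanity check: ≈ 1.0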

  7. Building Trust and Confidentiality in Cloud computing Distributed ...

    African Journals Online (AJOL)

    2013-03-01

    [Only a search-engine snippet of the abstract survives:] Department of Computer Science, University of Port Harcourt, Rivers State … considering the security and privacy of the information stored and processed within the cloud … protection (perhaps access control), through to …

  8. Computer controlled quality of analytical measurements

    International Nuclear Information System (INIS)

    Clark, J.P.; Huff, G.A.

    1979-01-01

    A PDP 11/35 computer system is used in evaluating analytical chemistry measurements quality control data at the Barnwell Nuclear Fuel Plant. This computerized measurement quality control system has several features which are not available in manual systems, such as real-time measurement control, computer-calculated bias corrections and standard deviation estimates, surveillance applications, evaluation of measurement system variables, records storage, immediate analyst recertification, and the elimination of routine analysis of known bench standards. The effectiveness of the Barnwell computer system has been demonstrated in gathering and assimilating the measurements of over 1100 quality control samples obtained during a recent plant demonstration run. These data were used to determine equations for predicting measurement reliability estimates (bias and precision); to evaluate the measurement system; and to provide direction for modification of chemistry methods. The analytical chemistry measurement quality control activities represented 10% of the total analytical chemistry effort.
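
    The bias and precision bookkeeping such a system performs reduces to simple statistics; a sketch with invented standard data (the Barnwell system's prediction equations are not reproduced):

      import statistics

      # Repeated analyses of a known standard give the bias correction and
      # precision estimate used for real-time measurement control.
      known_value = 10.00
      measurements = [10.04, 9.97, 10.10, 10.02, 9.95, 10.06, 10.01, 9.99]

      bias = statistics.fmean(measurements) - known_value
      s = statistics.stdev(measurements)
      ucl = known_value + bias + 3 * s      # upper control limit
      lcl = known_value + bias - 3 * s      # lower control limit

      def in_control(x):
          """Real-time check applied as each new QC result arrives."""
          return lcl <= x <= ucl

      print(f"bias={bias:+.3f}, s={s:.3f}, limits=({lcl:.2f}, {ucl:.2f})")
      print(in_control(10.30))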

  9. Game-Theoretic Learning in Distributed Control

    KAUST Repository

    Marden, Jason R.

    2018-01-05

    In distributed architecture control problems, there is a collection of interconnected decision-making components that seek to realize desirable collective behaviors through local interactions and by processing local information. Applications range from autonomous vehicles to energy to transportation. One approach to control of such distributed architectures is to view the components as players in a game. In this approach, two design considerations are the components’ incentives and the rules that dictate how components react to the decisions of other components. In game-theoretic language, the incentives are defined through utility functions, and the reaction rules are online learning dynamics. This chapter presents an overview of this approach, covering basic concepts in game theory, special game classes, measures of distributed efficiency, utility design, and online learning rules, all with the interpretation of using game theory as a prescriptive paradigm for distributed control design.
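
    As a minimal illustration of this prescriptive use of game theory (design local utilities, then run an online learning rule), the sketch below applies asynchronous best-response dynamics to an invented three-agent channel-allocation game; it is not taken from the chapter:

      import random

      # Three agents on a line graph each pick a channel; the local utility
      # rewards picking a channel no neighbor uses (local information only).
      NEIGHBORS = {0: [1], 1: [0, 2], 2: [1]}
      CHANNELS = [0, 1]

      def utility(agent, action, profile):
          """1 if no neighbor shares my channel, else 0."""
          return int(all(profile[n] != action for n in NEIGHBORS[agent]))

      rng = random.Random(0)
      profile = [rng.choice(CHANNELS) for _ in NEIGHBORS]
      for _ in range(20):               # asynchronous best-response updates
          agent = rng.randrange(len(profile))
          best = max(CHANNELS, key=lambda a: utility(agent, a, profile))
          if utility(agent, best, profile) > utility(agent, profile[agent], profile):
              profile[agent] = best     # revise only if strictly better

      # At a Nash equilibrium every agent's utility is 1 (proper 2-coloring).
      print(profile, [utility(i, profile[i], profile) for i in range(3)])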

  10. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    Science.gov (United States)

    1991-06-01

    [Abstract not recoverable: the source scan is garbled. Legible fragments identify an interim report, "Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource: Intelligent Executive Computer Communication", by John Lyman and Carla J. Conaway, University of California at Los Angeles, and a citation to Proceedings of the National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh.]

  11. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  12. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) - instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  13. Large distributed control system using Ada in fusion research

    International Nuclear Information System (INIS)

    Van Arsdall, P J; Woodruff, J P.

    1998-01-01

    Construction of the National Ignition Facility laser at Lawrence Livermore National Laboratory features a distributed control system that uses object-oriented software engineering techniques. Control of 60,000 devices is effected using a network of some 500 computers. The software is being written in Ada and communicates through CORBA. Software controls are implemented in two layers: individual device controllers and a supervisory layer. The software architecture provides services in the form of frameworks that address issues common to event-driven control systems. Those services are allocated to levels that strictly prescribe their interdependency so the levels are separately reusable. The project has completed its final design review. The delivery of the first increment takes place in October 1998. Keywords: distributed control system, object-oriented development, CORBA, application frameworks, levels of abstraction.

  14. Computer program for automatic generation of BWR control rod patterns

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Hsia, M.Y.

    1990-01-01

    A computer program named OCTOPUS has been developed to automatically determine a control rod pattern that approximates some desired target power distribution as closely as possible without violating any thermal safety or reactor criticality constraints. The program OCTOPUS performs a semi-optimization task based on the method of approximation programming (MAP) to develop control rod patterns. The SIMULATE-E code is used to determine the nucleonic characteristics of the reactor core state

  15. PROWAY - a standard for distributed control systems

    International Nuclear Information System (INIS)

    Gellie, R.W.

    1980-01-01

    The availability of cheap and powerful microcomputer and data communications equipment has led to a major revision of instrumentation and control systems. Intelligent devices can now be used and distributed about the control system in a systematic and economic manner. These sub-units are linked by a communications system to provide a total system capable of meeting the required plant objectives. PROWAY, an international standard process data highway for interconnecting processing units in distributed industrial process control systems, is currently being developed. This paper describes the salient features and current status of the PROWAY effort. (auth)

  16. Protect Heterogeneous Environment Distributed Computing from Malicious Code Assignment

    Directory of Open Access Journals (Sweden)

    V. S. Gorbatov

    2011-09-01

    Full Text Available The paper describes a practical implementation of a system protecting distributed computing in a heterogeneous environment from the assignment of malicious code. The choice of technologies, the development of data structures, and a performance evaluation of the implemented security system are presented.

  17. Computed tomography of surface related radionuclide distributions ('BONN'-tomography)

    International Nuclear Information System (INIS)

    Bockisch, A.; Koenig, R.

    1989-01-01

    A method called the 'BONN' tomography is described to produce planar projections of circular activity distributions using standard single photon emission computed tomography. The clinical value of the method is demonstrated for bone scans of the jaw, thorax, and pelvis. Numerical or projection-related problems are discussed. (orig.) [de]

  18. Distributed Computing with Centralized Support Works at Brigham Young.

    Science.gov (United States)

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  19. PHENIX On-Line Distributed Computing System Architecture

    International Nuclear Information System (INIS)

    Desmond, Edmond; Haggerty, John; Kehayias, Hyon Joo; Purschke, Martin L.; Witzig, Chris; Kozlowski, Thomas

    1997-01-01

    PHENIX is one of the two large experiments at the Relativistic Heavy Ion Collider (RHIC) currently under construction at Brookhaven National Laboratory. The detector consists of 11 sub-detectors that are further subdivided into 29 units ("granules") that can be operated independently, which includes simultaneous data taking with independent data streams and independent triggers. The detector has 250,000 channels and is read out by front end modules, where the data is buffered in a pipeline while awaiting the level-1 trigger decision. Zero suppression and calibration is done after the level-1 accept in custom built data collection modules (DCMs) with DSPs before the data is sent to an event builder (design throughput of 2 Gb/sec) and higher level triggers. The On-line Computing Systems Group (ONCS) has two responsibilities. Firstly it is responsible for receiving the data from the event builder, routing it through a network of workstations to consumer processes and archiving it at a data rate of 20 MB/sec. Secondly it is also responsible for the overall configuration, control and operation of the detector and data acquisition chain, which comprises the software integration for several thousand custom built hardware modules. The software must furthermore support the independent operation of the above mentioned granules, which includes the coordination of processes that run in 60-100 VME processors and workstations. ONCS has adapted the Shlaer-Mellor Object Oriented Methodology for the design of the top layer software. CORBA is used as communication layer between the distributed objects, which are implemented as asynchronous finite state machines. We will give an overview of the PHENIX online system with the main focus on the system architecture, software components and integration tasks of the On-line Computing group ONCS and report on the status of the current prototypes.

  20. Distributed adaptive droop control for DC distribution systems

    DEFF Research Database (Denmark)

    Nasirian, Vahidreza; Davoudi, Ali; Lewis, Frank

    2016-01-01

    Summary form only given: A distributed-adaptive droop mechanism is proposed for secondary/primary control of dc microgrids. The conventional secondary control that adjusts the voltage set point for the local droop mechanism is replaced by a voltage regulator. A current regulator is also added… to fine-tune the droop coefficient for different loading conditions. The voltage regulator uses an observer that processes neighbors' data to estimate the average voltage across the microgrid. This estimation is further used to generate a voltage correction term to adjust the local voltage set point… with the proposed controller engaged. A low-voltage dc microgrid prototype is used to verify the controller performance, link-failure resiliency, and the plug-and-play capability…
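
    A toy voltage-restoration loop conveys the flavor of the secondary correction described; the adaptive droop tuning and the neighbor-data observer are simplified away (with a single bus the average voltage is just the bus voltage), and all parameters are invented:

      # Two droop-controlled DC sources feed one resistive load; secondary
      # control shifts the local set points until the bus recovers nominal.
      V_NOM = 48.0                          # volts
      DROOP = (0.8, 0.5)                    # ohm-like droop gains
      R_LOAD = 4.0                          # common resistive load

      def bus_solve(set_points):
          """Solve the trivial network: source i injects (v_i - V)/d_i,
          the load draws V/R, so V = sum(v_i/d_i) / (1/R + sum(1/d_i))."""
          num = sum(v / d for v, d in zip(set_points, DROOP))
          den = 1.0 / R_LOAD + sum(1.0 / d for d in DROOP)
          return num / den

      set_points = [V_NOM, V_NOM]
      for _ in range(200):                  # secondary-control iterations
          v_avg = bus_solve(set_points)     # stand-in for the observer output
          correction = 0.05 * (V_NOM - v_avg)
          set_points = [sp + correction for sp in set_points]

      print(round(bus_solve(set_points), 3))   # ≈ 48.0 V, nominal restored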

  1. The Future of PanDA in ATLAS Distributed Computing

    CERN Document Server

    De, Kaushik; The ATLAS collaboration; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyze the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favor of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addi...

  2. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2013-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visua...

  3. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2014-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visua...

  4. Distributed controller clustering in software defined networks.

    Directory of Open Access Journals (Sweden)

    Ahmed Abdelaziz

    Full Text Available Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers respectively. Moreover, the proposed method also shows reasonable CPU utilization results. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.

  5. Distributed controller clustering in software defined networks.

    Science.gov (United States)

    Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond

    2017-01-01

    Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers respectively. Moreover, the proposed method also shows reasonable CPU utilization results. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.
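
    Since both records describe the same mechanism, a small sketch may help fix the idea of controller clustering with failover. The toy model below (Python) greedily homes switches on the least-loaded live controller and, on a controller failure, re-homes only the orphaned switches so the network keeps operating. The controller names, the load metric and the placement policy are illustrative assumptions, not the paper's implementation.

        from collections import Counter

        def rebalance(placement, live, new_switches=()):
            """Home each new/orphaned switch on the least-loaded live controller."""
            load = Counter(c for c in placement.values() if c in live)
            for c in live:
                load.setdefault(c, 0)
            for sw in new_switches:
                target = min(load, key=load.get)
                placement[sw] = target
                load[target] += 1
            return placement

        controllers = ["ctrl-1", "ctrl-2", "ctrl-3"]
        placement = rebalance({}, controllers, [f"s{i}" for i in range(9)])

        # A controller fails: keep the survivors' switches in place and
        # redistribute only the orphans, preserving continuous operation.
        failed = "ctrl-2"
        live = [c for c in controllers if c != failed]
        orphans = [sw for sw, c in placement.items() if c == failed]
        placement = {sw: c for sw, c in placement.items() if c != failed}
        placement = rebalance(placement, live, orphans)
        print(placement)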

  6. Control of renewable distributed power plants

    OpenAIRE

    Bullich Massagué, Eduard

    2015-01-01

    The main objective of this master thesis is to design a power plant controller for a photovoltaic (PV) power plant. In a first stage, the current status of the electrical grid is analysed. The electrical network structure is moving from a conventional system (with centralized power generation, unidirectional power flows, easy control) to a smart grid system consisting of distributed generation, renewable energies, smart and complex control architecture and ...

  7. Vertical Load Distribution for Cloud Computing via Multiple Implementation Options

    Science.gov (United States)

    Phan, Thomas; Li, Wen-Syan

    Cloud computing looks to deliver software as a provisioned service to end users, but the underlying infrastructure must be sufficiently scalable and robust. In our work, we focus on large-scale enterprise cloud systems and examine how enterprises may use a service-oriented architecture (SOA) to provide a streamlined interface to their business processes. To scale up the business processes, each SOA tier usually deploys multiple servers for load distribution and fault tolerance, a scenario which we term horizontal load distribution. One limitation of this approach is that load cannot be distributed further when all servers in the same tier are loaded. In complex multi-tiered SOA systems, a single business process may actually be implemented by multiple different computation pathways among the tiers, each with different components, in order to provide resilience and scalability. Such multiple implementation options give opportunities for vertical load distribution across tiers. In this chapter, we look at a novel request routing framework for SOA-based enterprise computing with multiple implementation options that takes into account the options of both horizontal and vertical load distribution.
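
    A hypothetical sketch of the routing decision may clarify the distinction: within a tier, horizontal distribution picks the least-loaded replica, while vertical distribution picks among whole implementation pathways by comparing their bottleneck tiers. The pathway names, tiers and load figures below are invented for illustration.

        # Choose among implementation pathways by their bottleneck tier load.
        # All names and load figures are hypothetical.
        PATHWAYS = {
            # Each pathway lists (tier, replica pool) pairs it traverses.
            "fast-path": [("web", ["w1", "w2"]), ("compute", ["c1"])],
            "full-path": [("web", ["w1", "w2"]), ("compute", ["c2", "c3"]),
                          ("reporting", ["r1"])],
        }
        load = {"w1": 0.6, "w2": 0.2, "c1": 0.9, "c2": 0.4, "c3": 0.5, "r1": 0.1}

        def pathway_cost(path):
            # Horizontal distribution: a tier costs its least-loaded replica;
            # the pathway costs its most congested (bottleneck) tier.
            return max(min(load[s] for s in pool) for _, pool in path)

        def route(request_id):
            # Vertical distribution: pick the pathway with the lightest bottleneck.
            best = min(PATHWAYS, key=lambda name: pathway_cost(PATHWAYS[name]))
            return request_id, best

        print(route("req-42"))    # -> ('req-42', 'full-path') for these loads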

  8. Computer applications in controlled fusion research

    International Nuclear Information System (INIS)

    Killeen, J.

    1975-01-01

    The application of computers to controlled thermonuclear research (CTR) is essential. In the near future the use of computers in the numerical modeling of fusion systems should increase substantially. A recent panel has identified five categories of computational models to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies is called for. The development and application of computer codes to implement these models is a vital step in reaching the goal of fusion power. To meet the needs of the fusion program, the National CTR Computer Center has been established at the Lawrence Livermore Laboratory. A large central computing facility is linked to smaller computing centers at each of the major CTR Laboratories by a communication network. The crucial element needed for success is trained personnel. The number of people with knowledge of plasma science and engineering trained in numerical methods and computer science must be increased substantially in the next few years. Nuclear engineering departments should encourage students to enter this field and provide the necessary courses and research programs in fusion computing.

  9. Computer applications in controlled fusion research

    International Nuclear Information System (INIS)

    Killeen, J.

    1975-02-01

    The role of Nuclear Engineering Education in the application of computers to controlled fusion research can be a very important one. In the near future the use of computers in the numerical modelling of fusion systems should increase substantially. A recent study group has identified five categories of computational models to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are called for. The development and application of computer codes to implement these models is a vital step in reaching the goal of fusion power. In order to meet the needs of the fusion program the National CTR Computer Center has been established at the Lawrence Livermore Laboratory. A large central computing facility is linked to smaller computing centers at each of the major CTR laboratories by a communications network. The crucial element that is needed for success is trained personnel. The number of people with knowledge of plasma science and engineering that are trained in numerical methods and computer science is quite small, and must be increased substantially in the next few years. Nuclear Engineering departments should encourage students to enter this field and provide the necessary courses and research programs in fusion computing. (U.S.)

  10. Above the cloud computing orbital services distributed data model

    Science.gov (United States)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above the cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.
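
    The identification and business-model tags described in this abstract can be pictured as a per-item record. The sketch below (Python) is one plausible shape for such a record; every field name is an assumption derived from the constraints the abstract lists (ownership, retention, resale and retransmission rights, integrity), not the paper's actual model.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class OrbitalDataItem:
            item_id: str                # identifier unique across all craft
            owner: str                  # craft/operator that owns the data
            holder: str                 # craft currently storing the item
            created_utc: float          # acquisition timestamp
            retain_until_utc: float     # contractual retention deadline
            may_resell: bool = False    # business-model (contractual) rights
            may_retransmit: bool = True
            checksum: str = ""          # integrity check for relayed copies
            replicas: List[str] = field(default_factory=list)  # other holders

            def may_discard(self, now_utc: float) -> bool:
                """True once the retention obligation has lapsed."""
                return now_utc >= self.retain_until_utc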

  11. Distributed interactive graphics applications in computational fluid dynamics

    International Nuclear Information System (INIS)

    Rogers, S.E.; Buning, P.G.; Merritt, F.J.

    1987-01-01

    Implementation of two distributed graphics programs used in computational fluid dynamics is discussed. Both programs are interactive in nature. They run on a CRAY-2 supercomputer and use a Silicon Graphics Iris workstation as the front-end machine. The hardware and supporting software are from the Numerical Aerodynamic Simulation project. The supercomputer does all numerically intensive work and the workstation, as the front-end machine, allows the user to perform real-time interactive transformations on the displayed data. The first program was written as a distributed program that computes particle traces for fluid flow solutions existing on the supercomputer. The second is an older post-processing and plotting program modified to run in a distributed mode. Both programs have realized a large increase in speed over that obtained using a single machine. By using these programs, one can learn quickly about complex features of a three-dimensional flow field. Some color results are presented

  12. Computer science approach to quantum control

    International Nuclear Information System (INIS)

    Janzing, D.

    2006-01-01

    Whereas it is obvious that every computation process is a physical process, it has hardly been recognized that many complex physical processes bear similarities to computation processes. This is in particular true for the control of physical systems on the nanoscopic level: usually the system can only be accessed via a rather limited set of elementary control operations, and for many purposes only a concatenation of a large number of these basic operations will implement the desired process. This concatenation is in many cases quite similar to building complex programs from elementary steps, and principles for designing algorithms may thus be a paradigm for designing control processes. For instance, one can decrease the temperature of one part of a molecule by transferring its heat to the remaining part, where it is then dissipated to the environment. But the implementation of such a process involves a complex sequence of electromagnetic pulses. This work considers several hypothetical control processes on the nanoscopic level and shows their analogy to computation processes. We show that measuring certain types of quantum observables is such a complex task that every instrument able to perform it would necessarily be an extremely powerful computer. Likewise, the implementation of a heat engine on the nanoscale requires processing the heat in a way that is similar to information processing, and it can be shown that heat engines with maximal efficiency would be powerful computers, too. In the same way as problems in computer science can be classified by complexity classes, we can also classify control problems according to their complexity. Moreover, we directly relate these complexity classes for control problems to the classes in computer science. Unifying notions of complexity in computer science and physics therefore has two aspects: on the one hand, computer science methods help to analyze the complexity of physical processes. On the other hand, reasonable

  13. Upgrade plan for HANARO control computer system

    International Nuclear Information System (INIS)

    Kim, Min Jin; Kim, Young Ki; Jung, Hwan Sung; Choi, Young San; Woo, Jong Sub; Jun, Byung Jin

    2001-01-01

    A microprocessor-based digital control system, the Multi-Loop Controller (MLC), which was chosen to control HANARO, was introduced to the market in the early '80s and had been used to control petrochemical plants, paper mills and the Slowpoke reactor in Canada. Due to developments in computer technology, it has become an outdated model and its production was discontinued a few years ago. Hence, difficulty in acquiring spare parts is expected. To achieve stable reactor control during the reactor's lifetime and to avoid possible technical dependency on the manufacturer, a long-term replacement plan for the HANARO control computer system is under way. The plan includes several steps. This paper briefly introduces the methods of implementation of the process and discusses the engineering activities of the plan.

  14. Safety analysis of control rod drive computers

    International Nuclear Information System (INIS)

    Ehrenberger, W.; Rauch, G.; Schmeil, U.; Maertz, J.; Mainka, E.U.; Nordland, O.; Gloee, G.

    1985-01-01

    The analysis of the most significant user programmes revealed no errors in these programmes. The evaluation of approximately 82 cumulated years of operation demonstrated that the operating system of the control rod positioning processor has a reliability that is sufficiently good for the tasks this computer has to fulfil. Computers can be used for safety-relevant tasks. The experience gained with the control rod positioning processor confirms that computers are not less reliable than conventional instrumentation and control systems for comparable tasks. The examination and evaluation of computers for safety-relevant tasks can be done with programme analysis or statistical evaluation of the operating experience. Programme analysis is recommended for seldom-used and well-structured programmes. For programmes with a long cumulated operating time a statistical evaluation is more advisable. The effort for examination and evaluation is not greater than the corresponding effort for conventional instrumentation and control systems. This project has also revealed that, where it is technologically sensible, process-controlling computers or microprocessors can be qualified for safety-relevant tasks without undue effort. (orig./HP)

  15. Control of peripheral units by satellite computer

    International Nuclear Information System (INIS)

    Tran, K.T.

    1974-01-01

    A computer system was developed to allow the control of nuclear physics experiments and the use of the results by means of graphical and conversational facilities. This system, which is made of two computers, one IBM 370/135 and one Telemecanique Electrique T1600, controls the conventional IBM peripherals and also the special ones made in the laboratory, such as data acquisition, display and graphics units. The visual display is implemented by a scanning-type television equipped with a light-pen. These units are in themselves universal, but their specifications were established to meet the requirements of nuclear physics experiments. The input-output channels of the two computers have been connected together by an interface designed and implemented in the laboratory. This interface allows the exchange of control signals and data (the data are changed from bytes into words and vice versa). The T1600 controls the peripherals mentioned above according to the commands of the IBM 370. Hence the T1600 here plays the part of a satellite computer which allows conversation with the main computer and also ensures the control of its special peripheral units.

  16. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  17. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Deboever, Jeremiah [Georgia Inst. of Technology, Atlanta, GA (United States); Zhang, Xiaochen [Georgia Inst. of Technology, Atlanta, GA (United States); Reno, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Broderick, Robert Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grijalva, Santiago [Georgia Inst. of Technology, Atlanta, GA (United States); Therrien, Francis [CME International T&D, St. Bruno, QC (Canada)

    2017-06-01

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
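
    The first challenge in that list, the sheer number of power flows, is easy to quantify from figures already quoted in the abstract; the short calculation below (Python) shows that even a solver taking about a millisecond per power flow accumulates tens of hours over a simulated year at 1-second resolution.

        # Scale of a yearlong QSTS run at 1-second resolution, using only
        # the figures quoted in the abstract.
        steps = 365 * 24 * 3600                     # power flows in one year
        print(f"{steps:,} power flow solutions")    # 31,536,000
        for hours in (10, 120):                     # reported wall-clock range
            per_flow_ms = hours * 3600 * 1000 / steps
            print(f"{hours:4d} h total -> {per_flow_ms:.2f} ms per power flow")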

  18. Maintaining Traceability in an Evolving Distributed Computing Environment

    Science.gov (United States)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For response to incidents to be acceptable, this needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event, including at least the following: connect, authenticate, authorize (including identity changes) and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their
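
    The minimum per-event information enumerated above lends itself to a structured log record. The following sketch (Python) is one plausible encoding; the field names and example values are illustrative assumptions, not a WLCG/EGI/OSG schema.

        import json, time

        def trace_event(service, instance, event, identity, source, extra=None):
            """One traceability record: who, what, where and when."""
            record = {
                "when": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "service": service,    # e.g. storage, compute, portal
                "instance": instance,  # which service instance saw the event
                "event": event,        # connect/authenticate/authorize/disconnect
                "identity": identity,  # digital identity, e.g. certificate DN
                "source": source,      # originating host, VO or pilot
            }
            if extra:
                record.update(extra)   # e.g. identity changes on authorize
            return json.dumps(record)

        print(trace_event("compute", "wn-017", "authorize",
                          "/DC=ch/DC=example/CN=jane", "vo-pilot-42",
                          {"mapped_identity": "pool-user-007"}))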

  19. Robot-Arm Dynamic Control by Computer

    Science.gov (United States)

    Bejczy, Antal K.; Tarn, Tzyh J.; Chen, Yilong J.

    1987-01-01

    Feedforward and feedback schemes linearize responses to control inputs. The method for control of a robot arm is based on computed nonlinear feedback and state transformations that linearize the system and decouple robot end-effector motions along each of the Cartesian axes, augmented with an optimal scheme for correction of errors in the workspace. A major new feature of the control method is that the optimal error-correction loop operates directly on the task level and not on the joint-servocontrol level.

  20. Distributed control system for parallel-connected DC boost converters

    Science.gov (United States)

    Goldsmith, Steven

    2017-08-15

    The disclosed invention is a distributed control system for operating a DC bus fed by disparate DC power sources that service a known or unknown load. The voltage sources vary in v-i characteristics and have time-varying, maximum supply capacities. Each source is connected to the bus via a boost converter, which may have different dynamic characteristics and power transfer capacities, but are controlled through PWM. The invention tracks the time-varying power sources and apportions their power contribution while maintaining the DC bus voltage within the specifications. A central digital controller solves the steady-state system for the optimal duty cycle settings that achieve a desired power supply apportionment scheme for a known or predictable DC load. A distributed networked control system is derived from the central system that utilizes communications among controllers to compute a shared estimate of the unknown time-varying load through shared bus current measurements and bus voltage measurements.
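
    The shared load estimate at the heart of this claim can be illustrated with a consensus-style sketch (Python): each controller measures its own converter's output current, repeatedly averages with its neighbors over the communication network, and scales the agreed average by the number of converters to recover the total load current. The topology, gain and current values below are assumptions, not the patented algorithm.

        # Controllers agree on the unknown total load current from local
        # output-current measurements shared over a line network.
        measured = [3.2, 1.1, 2.4]              # converter output currents [A]
        n = len(measured)
        estimate = measured[:]                  # local estimates of the average
        neighbors = {0: [1], 1: [0, 2], 2: [1]} # who talks to whom
        EPS = 0.3                               # consensus step size

        for _ in range(60):
            estimate = [x + EPS * sum(estimate[j] - x for j in neighbors[i])
                        for i, x in enumerate(estimate)]

        for i, x in enumerate(estimate):
            print(f"controller {i}: total load ~ {n * x:.2f} A")  # all ~6.70 A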

  1. Configuring a computer-controlled bar system

    OpenAIRE

    Šuštaršič, Nejc

    2010-01-01

    The principal goal of my diploma thesis is to create an application for configuring computer-controlled beverage dispensing systems. In the preamble of my thesis I present the theoretical platform for point-of-sale systems and beverage dispensing systems, which is required for understanding the problem domain. As with many other fields, computer technologies entered the field of managing bars and restaurants quite some time ago. Basic components of every bar or restaurant a...

  2. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    Science.gov (United States)

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
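
    The link between the peak-height distribution and the distribution of the ratio of peak heights is easy to probe numerically. The Monte Carlo sketch below (Python) draws pairs of log-normal peak heights and summarizes the ratio of the smaller to the larger; the distribution parameters are arbitrary choices, not the published mixture statistics, and the full minimum-resolution theory of the paper is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)
        mu, sigma, n = 0.0, 1.0, 100_000
        h1 = rng.lognormal(mu, sigma, n)      # heights of two adjacent peaks
        h2 = rng.lognormal(mu, sigma, n)
        ratio = np.minimum(h1, h2) / np.maximum(h1, h2)

        # log(h1/h2) is normal with variance 2*sigma**2, so quantiles of the
        # folded ratio can be cross-checked analytically.
        print("median height ratio:", np.median(ratio))
        print("5th percentile     :", np.percentile(ratio, 5))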

  3. AGIS: Evolution of Distributed Computing information system for ATLAS

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  4. Feedback brake distribution control for minimum pitch

    Science.gov (United States)

    Tavernini, Davide; Velenis, Efstathios; Longo, Stefano

    2017-06-01

    The distribution of brake forces between front and rear axles of a vehicle is typically specified such that the same level of brake force coefficient is imposed at both front and rear wheels. This condition is known as 'ideal' distribution and it is required to deliver the maximum vehicle deceleration and minimum braking distance. For subcritical braking conditions, the deceleration demand may be delivered by different distributions between front and rear braking forces. In this research we show how to obtain the optimal distribution which minimises the pitch angle of a vehicle and hence enhances driver subjective feel during braking. A vehicle model including suspension geometry features is adopted. The problem of the minimum pitch brake distribution for a varying deceleration level demand is solved by means of a model predictive control (MPC) technique. To address the problem of the undesirable pitch rebound caused by a full-stop of the vehicle, a second controller is designed and implemented independently from the braking distribution in use. An extended Kalman filter is designed for state estimation and implemented in a high fidelity environment together with the MPC strategy. The proposed solution is compared with the reference 'ideal' distribution as well as another previous feed-forward solution.
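
    The 'ideal' condition can be stated concretely: each axle's brake force equals the deceleration-dependent brake force coefficient times that axle's instantaneous vertical load, with longitudinal load transfer included. The sketch below (Python) evaluates this textbook relation for assumed vehicle parameters; it is not the paper's suspension-aware MPC model.

        G = 9.81
        m, L, h = 1500.0, 2.6, 0.55   # mass [kg], wheelbase [m], CoG height [m]
        b = 1.2                       # CoG distance behind the front axle [m]

        def ideal_split(decel):
            """Front/rear brake forces [N] at deceleration decel [m/s^2]."""
            fzf = m * G * (L - b) / L + m * decel * h / L  # front axle load
            fzr = m * G * b / L - m * decel * h / L        # rear axle load
            k = decel / G              # common brake force coefficient
            return k * fzf, k * fzr

        for a in (2.0, 5.0, 8.0):
            ff, fr = ideal_split(a)
            print(f"a = {a:.0f} m/s^2: front {ff:7.0f} N ({ff / (ff + fr):.0%} "
                  f"of total), rear {fr:6.0f} N")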

  5. Guide to cloud computing for business and technology managers from distributed computing to cloudware applications

    CERN Document Server

    Kale, Vivek

    2014-01-01

    Guide to Cloud Computing for Business and Technology Managers: From Distributed Computing to Cloudware Applications unravels the mystery of cloud computing and explains how it can transform the operating contexts of business enterprises. It provides a clear understanding of what cloud computing really means, what it can do, and when it is practical to use. Addressing the primary management and operation concerns of cloudware, including performance, measurement, monitoring, and security, this pragmatic book: Introduces the enterprise applications integration (EAI) solutions that were a first ste

  6. A wireless computational platform for distributed computing based traffic monitoring involving mixed Eulerian-Lagrangian sensing

    KAUST Repository

    Jiang, Jiming

    2013-06-01

    This paper presents a new wireless platform designed for an integrated traffic monitoring system based on combined Lagrangian (mobile) and Eulerian (fixed) sensing. The sensor platform is built around a 32-bit ARM Cortex M4 micro-controller and a 2.4 GHz 802.15.4 ISM compliant radio module, and can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. The platform is specially designed and optimized to be integrated in a solar-powered wireless sensor network in which traffic flow maps are computed by the nodes directly using distributed computing. An MPPT circuit is proposed to increase the power output of the attached solar panel. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. A radio monitoring circuit is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. An ongoing implementation is briefly discussed, and compared with existing platforms used in wireless sensor networks. © 2013 IEEE.

  7. Computer-aided control system design

    International Nuclear Information System (INIS)

    Lebenhaft, J.R.

    1986-01-01

    Control systems are typically implemented using conventional PID controllers, which are then tuned manually during plant commissioning to compensate for interactions between feedback loops. As plants increase in size and complexity, such controllers can fail to provide adequate process regulation. Multivariable methods can be utilized to overcome these limitations. At the Chalk River Nuclear Laboratories, modern control systems are designed and analyzed with the aid of MVPACK, a system of computer programs that appears to the user like a high-level calculator. The software package solves complicated control problems, and provides useful insight into the dynamic response and stability of multivariable systems.

  8. Distributed Adaptive Droop Control for DC Distribution Systems

    DEFF Research Database (Denmark)

    Nasirian, Vahidreza; Davoudi, Ali; Lewis, Frank

    2014-01-01

    A distributed-adaptive droop mechanism is proposed for secondary/primary control of dc microgrids. The conventional secondary control, which adjusts the voltage set point for the local droop mechanism, is replaced by a voltage regulator. A current regulator is then added to fine-tune the droop...... coefficient for different loading conditions. The voltage regulator uses an observer that processes neighbors’ data to estimate the average voltage across the microgrid. This estimation is further used to generate a voltage correction term to adjust the local voltage set point. The current regulator compares...... engaged. A low-voltage dc microgrid prototype is used to verify the controller performance, link-failure resiliency, and the plug-and-play capabilities....

  9. Computer-controlled 3-D treatment delivery

    International Nuclear Information System (INIS)

    Fraass, Benedick A.

    1995-01-01

    Purpose/Objective: This course will describe the use of computer-controlled treatment delivery techniques for treatment of patients with sophisticated conformal therapy. In particular, research and implementation issues related to clinical use of computer-controlled conformal radiation therapy (CCRT) techniques will be discussed. The potential advantages of CCRT techniques will be highlighted using results from clinical 3-D planning studies. Materials and Methods: In recent years, 3-D treatment planning has been used to develop and implement 3-D conformal therapy treatment techniques, and studies based on these conformal treatments have begun to show the promise of conformal therapy. This work has been followed by the development of commercially available multileaf collimator and computer control systems for treatment machines. Using these (and other) CCRT devices, various centers are beginning to clinically use complex computer-controlled treatments. Both research and clinical CCRT treatment techniques will be discussed in this presentation. General concepts and requirements for CCRT will be mentioned. Developmental and clinical experience with CCRT techniques from a number of centers will be utilized. Results: Treatment planning, treatment preparation and treatment delivery must be approached in an integrated fashion in order to clinically implement CCRT treatment techniques, and the entire process will be discussed. Various CCRT treatment methodologies will be reviewed from operational, dosimetric, and technical points of view. The discussion will concentrate on CCRT techniques which are likely to see rather wide dissemination over the next several years, including particularly the use of multileaf collimators (MLC), dynamic and segmental conformal therapy, conformal field shaping, and other related techniques. More advanced CCRT techniques, such as the use of individualized intensity modulation of beams or segments, and the use of computer-controlled

  10. ASTEC: Controls analysis for personal computers

    Science.gov (United States)

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  11. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
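
    The batch pattern described here (many independent realizations farmed out to workers) can be sketched with Python's standard library in place of the Java Parallel Processing Framework; run_realization below is a stand-in for generating one stochastic model and running MODFLOW on it, not the authors' code.

        from concurrent.futures import ProcessPoolExecutor
        import random

        def run_realization(seed):
            """Stand-in for one stochastic model run (e.g. a MODFLOW job)."""
            rng = random.Random(seed)
            # ...generate conductivity field, run flow model, delineate zone...
            return rng.gauss(100.0, 15.0)   # placeholder capture-zone metric

        if __name__ == "__main__":
            seeds = range(500)              # 500 realizations, as in the paper
            with ProcessPoolExecutor(max_workers=50) as pool:
                results = list(pool.map(run_realization, seeds))
            mean = sum(results) / len(results)
            print(f"mean over {len(results)} realizations: {mean:.1f}")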

  12. HEP@Home - A distributed computing system based on BOINC

    CERN Document Server

    Amorim, A; Andrade, P; Amorim, Antonio; Villate, Jaime; Andrade, Pedro

    2005-01-01

    Project SETI@HOME has proven to be one of the biggest successes of distributed computing in recent years. With a quite simple approach SETI manages to process large volumes of data using a vast amount of distributed computer power. To extend the generic usage of this kind of distributed computing tool, BOINC is being developed. In this paper we propose HEP@HOME, a BOINC version tailored to the specific requirements of the High Energy Physics (HEP) community. HEP@HOME will be able to process large amounts of data using virtually unlimited computing power, as BOINC does, and it should be able to work according to HEP specifications. In HEP the amounts of data to be analyzed or reconstructed are of central importance. Therefore, one of the design principles of this tool is to avoid data transfer. This will allow scientists to run their analysis applications while taking advantage of a large number of CPUs. This tool also satisfies other important requirements in HEP, namely, security, fault-tolerance an...

  13. Cryptographically Secure Multiparty Computation and Distributed Auctions Using Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Anunay Kulshrestha

    2017-12-01

    Full Text Available We introduce a robust framework that allows for cryptographically secure multiparty computations, such as distributed private-value auctions. The security is guaranteed by two-sided authentication of all network connections, homomorphically encrypted bids, and the publication of zero-knowledge proofs of every computation. This also allows a non-participant verifier to verify the result of any such computation using only the information broadcast on the network by each individual bidder. Building on previous work on such systems, we design and implement an extensible framework that puts the described ideas into practice. Apart from the actual implementation of the framework, our biggest contribution is the level of protection we are able to guarantee from attacks described in previous work. In order to provide guidance to users of the library, we analyze the use of zero-knowledge proofs in ensuring the correct behavior of each node in a computation. We also describe the usage of the library to perform a private-value distributed auction, as well as the other challenges in implementing the protocol, such as auction registration and certificate distribution. Finally, we provide performance statistics on our implementation of the auction.
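
    The homomorphic property that lets encrypted bids be combined without decryption can be demonstrated with a toy Paillier cryptosystem, sketched below in Python. The parameters are tiny and insecure, chosen purely for illustration; this is not the paper's framework, and no zero-knowledge proofs are shown.

        import math, random

        p, q = 1_000_003, 1_000_033      # demo primes (far too small for security)
        n, n2 = p * q, (p * q) ** 2
        lam = math.lcm(p - 1, q - 1)     # Carmichael's lambda for n = p*q
        mu = pow(lam, -1, n)             # valid because we pick g = n + 1

        def encrypt(m):
            r = random.randrange(2, n)   # random blinding factor
            return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

        def decrypt(c):
            x = pow(c, lam, n2)
            return ((x - 1) // n) * mu % n

        bids = [120, 340, 55]
        ciphertexts = [encrypt(b) for b in bids]
        combined = math.prod(ciphertexts) % n2       # multiply ciphertexts...
        print(decrypt(combined), "==", sum(bids))    # ...to add plaintext bids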

  14. Improving flow distribution in influent channels using computational fluid dynamics.

    Science.gov (United States)

    Park, No-Suk; Yoon, Sukmin; Jeong, Woochang; Lee, Seungjae

    2016-10-01

    The flow distribution in an influent channel, where the inflow is split between the treatment processes of a wastewater treatment plant, greatly affects the efficiency of the process, and a weir is the typical structure for flow distribution; to the authors' knowledge, however, there is a paucity of research on the flow distribution in an open channel with a weir. In this study, the influent channel of a real-scale wastewater treatment plant was used, installing a suppressed rectangular weir that has a horizontal crest crossing the full channel width. The flow distribution in the influent channel was analyzed using a validated computational fluid dynamics model to investigate (1) the comparison of single-phase and two-phase simulation, (2) the improvement procedure for the prototype channel, and (3) the effect of the inflow rate on flow distribution. The results show that two-phase simulation is more reliable because it describes the free-surface fluctuations. Preventing short-circuit flow should be the first consideration when improving flow distribution, and the difference in kinetic energy with the inflow rate makes flow distribution trends differ. The authors believe that this case study is helpful for improving flow distribution in an influent channel.

  15. File and metadata management for BESIII distributed computing

    International Nuclear Information System (INIS)

    Nicholson, C; Zheng, Y H; Lin, L; Deng, Z Y; Li, W D; Zhang, X M

    2012-01-01

    The BESIII experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPCII e+e− collider to study physics in the τ-charm energy region around 3.7 GeV; BEPCII has produced the world's largest samples of J/ψ and ψ' events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very centralized to a distributed computing environment, as well as the development of an efficient file and metadata management system. While BESIII is on a smaller scale than some other HEP experiments, this poses particular challenges for its distributed computing and data management system. These constraints include limited resources and manpower, and low quality of network connections to IHEP. Drawing on the rich experience of the HEP community, a system has been developed which meets these constraints. The design and development of the BESIII distributed data management system, including its integration with other BESIII distributed computing components, such as job management, are presented here.

  16. Computer control system of TARN-2

    International Nuclear Information System (INIS)

    Watanabe, S.

    1989-01-01

    The CAMAC interface system is employed in order to regulate the power supplies, beam diagnostics and so on. Five CAMAC stations are located in the TARN-2 area and are linked with a serial highway system. The CAMAC serial highway is driven by a serial highway driver, Kinetic 3992, which is housed in the CAMAC powered crate and regulated by two complementary methods. One is regulation by the minicomputer through the standard branch-highway crate controller, named Type-A2, and the other is regulation by the microcomputer through the auxiliary crate controller. The CAMAC serial highway comprises two-way optical cables with a total length of 300 m. Each CAMAC station has serial and auxiliary crate controllers so as to allow alternative control with the local computer system. The interpreter INSBASIC is used on the main control computer and provides many kinds of 'device control functions'. Because each 'device control function' encapsulates the physical operating procedure of a device, only knowledge of the logical operating procedure is required. A touch panel system is employed to regulate the complicated control flow without any knowledge of the usage of the device. A rotary encoder system, which is analogous to potentiometer operation, is also available for smooth adjustment of setting parameters. (author)

  17. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  18. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  19. Computed tomography-controlled stereotactic surgery

    International Nuclear Information System (INIS)

    Matsumoto, Keizo; Shichijo, Fumio; Gyoten, Tetsuya; Tomida, Keisuke; Miyake, Hajime

    1986-01-01

    The coordinate system of the computed tomography (CT) scanner alone is utilized for CT-controlled stereotactic surgery. Depth, direction and readjustment of the target trajectory are defined by known values of the cursor number in CT images and the numbers of the sliding table indicator. We loaded the calculation formulas into a hand-held computer to obtain immediate answers. The stereotactic apparatus consists of two main parts: the patient's head fixation and the probe holder. Surgery was performed successfully in cases of hypertensive intracerebral hemorrhage for evacuation of the hematomas. Target accuracy was satisfactory. With further advances in this surgery, automatic stereotactic control with a special robot machine seems possible. (author)

  20. Pulmonary blood flow distribution measured by radionuclide computed tomography

    International Nuclear Information System (INIS)

    Maeda, H.; Itoh, H.; Ishii, Y.

    1982-01-01

    Distributions of pulmonary blood flow per unit lung volume were measured in sitting patients with a radionuclide computed tomography (RCT) system using intravenously administered Tc-99m macroaggregates of human serum albumin (MAA). Four different types of distribution were distinguished, among which a group referred to as type 2 had a three-zonal blood flow distribution as previously reported (West and co-workers, 1964). The pulmonary arterial pressure (Pa) and the venous pressure (Pv) were determined in this group. These values showed satisfactory agreement with the pulmonary artery pressure (Par) and the capillary wedge pressure (Pcw) measured by Swan-Ganz catheter in eighteen supine patients. These good correlations enable the establishment of a noninvasive methodology for measurement of pulmonary vascular pressures.

  1. Distributed Scheme to Authenticate Data Storage Security in Cloud Computing

    OpenAIRE

    B. Rakesh; K. Lalitha; M. Ismail; H. Parveen Sultana

    2017-01-01

    Cloud computing is the revolution in the current generation of IT enterprise. Cloud computing displaces databases and application software to large data centres, where the management of services and data may not be predictable, whereas conventional solutions for IT services are under proper logical, physical and personal controls. This attribute, however, brings different security challenges which have not been well understood. It concentrates on cloud data storage security which h...

  2. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Karavakis, E; Andreeva, J; Campana, S; Saiz, P; Gayazov, S; Jezequel, S; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  3. Dealing with distributed intelligence in monitoring and control systems

    International Nuclear Information System (INIS)

    McLaren, R.A.

    1981-01-01

    The European Hybrid Spectrometer is built up of many individual detectors, each having widely varying monitoring and control requirements. With the advent of cheap microprocessor systems, a shift from the concept of a single monitoring and control computer to that of distributed intelligent controllers has become economically feasible. A detector designer can now thoroughly test and debug a complete monitoring and control system on a local, dedicated micro-computer, while during operation the central computer can be relieved of many simple repetitive tasks. Rapidly, however, it has become obvious that the designers of these systems have to take into account the final operational environment and build into both the hardware and software features allowing easy integration into a central monitoring and control chain. In addition, the problems of maintenance and eventual modification have to be taken into consideration early in the development. Examples of currently operational systems will be briefly described to demonstrate how a set of basic guidelines plus standardisation of hardware/software can minimise the problems of integration and maintenance. Based on practical experience gained in the European Hybrid Spectrometer, investigations are proceeding on various possible alternatives for future micro-computer based monitoring and control systems. (orig.)

  4. Energy Efficiency of Distributed Environmental Control Systems

    Energy Technology Data Exchange (ETDEWEB)

    Khalifa, H. Ezzat; Isik, Can; Dannenhoffer, John F. III

    2011-02-23

    In this report, we present an analytical evaluation of the potential of occupant-regulated distributed environmental control systems (DECS) to enhance individual occupant thermal comfort in an office building with no increase, and possibly even a decrease, in annual energy consumption. To this end we developed and applied several analytical models that allowed us to optimize comfort and energy consumption in partitioned office buildings equipped with either conventional central HVAC systems or occupant-regulated DECS. Our approach involved the following interrelated components: 1. Development of a simplified lumped-parameter thermal circuit model to compute the annual energy consumption. This was necessitated by the need to perform tens of thousands of optimization calculations involving different US climatic regions and the different thermal preferences of a population of ~50 office occupants. Yearly transient simulations using TRNSYS, a time-dependent building energy modeling program, were run to determine the robustness of the simplified approach against time-dependent simulations. The simplified model predicts yearly energy consumption within approximately 0.6% of an equivalent transient simulation. Simulations of building energy usage were run for a wide variety of climatic regions and control scenarios, including traditional “one-size-fits-all” (OSFA) control, providing a uniform temperature to the entire building, and occupant-selected “have-it-your-way” (HIYW) control with a thermostat at each workstation. The thermal model shows that, un-optimized, DECS would lead to an increase in building energy consumption of 3-16% compared to the conventional approach, depending on the climate region and the personal preferences of building occupants. Variations in building shape had little impact on relative energy usage. 2. Development of a gradient-based optimization method to minimize energy consumption of DECS while keeping each occupant

  5. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover improvements to database service scalability by client connection management; platform-independent, multi-tier scalable database access by connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.

  6. On the computation of momentum distributions within wavepacket propagation calculations

    International Nuclear Information System (INIS)

    Feuerstein, Bernold; Thumm, Uwe

    2003-01-01

    We present a new method to extract momentum distributions from time-dependent wavepacket calculations. In contrast to the established Fourier transformation of the spatial wavepacket at a fixed time, the proposed 'virtual detector' method examines the time dependence of the wavepacket at a fixed position. In first applications to the ionization of model atoms and the dissociation of H₂⁺, we find a significant reduction of computing time and are able to extract reliable fragment momentum distributions using a comparatively small spatial numerical grid for the time-dependent wavefunction.

  7. Radar data processing using a distributed computational system

    Science.gov (United States)

    Mota, Gilberto F.

    1992-06-01

    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  8. Coordinated Voltage Control of Active Distribution Network

    Directory of Open Access Journals (Sweden)

    Xie Jiang

    2016-01-01

    This paper presents a centralized coordinated voltage control method for active distribution networks, addressing the voltage limit violations that can follow the incorporation of distributed generation (DG). The proposed method consists of two parts: it coordinates primal-dual interior-point-based voltage regulation schemes for DG reactive power and capacitors with a centralized on-load tap changer (OLTC) control method that uses only the system's maximum and minimum voltages, in order to improve the voltage qualification rate and reduce the number of OLTC operations. The proposed coordination takes the cost of the capacitors into account. The method is tested on a modified radial IEEE 33-node distribution network modelled in MATLAB.
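
    As an illustration of the tap-selection rule described above, the sketch below chooses an OLTC tap change from the feeder's maximum and minimum bus voltages alone. The voltage limits, tap step and wear-avoidance rule are illustrative assumptions, not values from the paper.

        # Pick the tap change that centres the feeder's extreme voltages inside
        # the allowed band; move only on an actual limit violation, which
        # keeps the number of OLTC operations down.
        def oltc_tap_adjustment(v_bus, v_min=0.95, v_max=1.05, tap_step=0.0125):
            hi, lo = max(v_bus), min(v_bus)
            midpoint_error = (hi + lo) / 2.0 - (v_max + v_min) / 2.0
            if lo < v_min or hi > v_max:
                return -round(midpoint_error / tap_step)
            return 0

        print(oltc_tap_adjustment([0.93, 0.97, 1.01]))  # -> 2 (raise voltage by two taps)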

  9. Model Predictive Control for Distributed Microgrid Battery Energy Storage Systems

    DEFF Research Database (Denmark)

    Morstyn, Thomas; Hredzak, Branislav; Aguilera, Ricardo P.

    2018-01-01

    This brief proposes a new convex model predictive control (MPC) strategy for dynamic optimal power flow between battery energy storage (ES) systems distributed in an ac microgrid. The proposed control strategy uses a new problem formulation, based on a linear d-q reference frame voltage ..., and converter current constraints to be addressed. In addition, nonlinear variations in the charge and discharge efficiencies of lithium ion batteries are analyzed and included in the control strategy. Real-time digital simulations were carried out for an islanded microgrid based on the IEEE 13 bus prototypical feeder, with distributed battery ES systems and intermittent photovoltaic generation. It is shown that the proposed control strategy approaches the performance of a strategy based on nonconvex optimization, while reducing the required computation time by a factor of 1000, making it suitable for a real ...
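
    The flavour of such a convex dispatch problem can be shown with a much smaller example: scheduling a single battery against a price signal as a linear program. This is a toy sketch (the paper's d-q voltage model and microgrid constraints are not reproduced); all numbers are illustrative, and scipy's linprog stands in for the solver.

        # Schedule one battery against a day-ahead price: minimise price . p
        # subject to state-of-charge limits (dt = 1 h, charge power p > 0).
        import numpy as np
        from scipy.optimize import linprog

        T = 24
        price = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))    # $/MWh, illustrative
        p_max, soc0, soc_min, soc_max = 1.0, 2.0, 0.0, 4.0        # MW and MWh

        L = np.tril(np.ones((T, T)))               # soc[t] = soc0 + (L @ p)[t]
        A_ub = np.vstack([L, -L])
        b_ub = np.concatenate([np.full(T, soc_max - soc0),
                               np.full(T, soc0 - soc_min)])

        res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(-p_max, p_max)] * T)
        print(res.x.round(2))    # discharges during the most expensive hours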

  10. Distributed dynamic simulations of networked control and building performance applications.

    Science.gov (United States)

    Yahiaoui, Azzedine

    2018-02-01

    The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum possible energy consumption; this approach is generally referred to as the Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and to improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment capable of representing the BACS architecture in simulation by run-time coupling of two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design.

  11. Characteristics of the TRISTAN control computer network

    International Nuclear Information System (INIS)

    Kurokawa, Shinichi; Akiyama, Atsuyoshi; Katoh, Tadahiko; Kikutani, Eiji; Koiso, Haruyo; Oide, Katsunobu; Shinomoto, Manabu; Kurihara, Michio; Abe, Kenichi

    1986-01-01

    Twenty-four minicomputers forming an N-to-N token-ring network control the TRISTAN accelerator complex. The computers are linked by optical fiber cables with a 10 Mbps transmission speed. The software system is based on NODAL, a multicomputer interpretive language developed at the CERN SPS. The high-level services offered to the users of the network are remote execution, via the EXEC, EXEC-P and IMEX commands of NODAL, and uniform file access throughout the system. The network software was designed to achieve a fast response for the EXEC command. The performance of the network is also reported. Tasks that would overload the minicomputers are processed on the KEK central computers. One minicomputer in the network serves as a gateway to KEKNET, which connects the minicomputer network and the central computers. The communication with the central computers is managed within the framework of the KEK NODAL system: NODAL programs communicate with the central computers by calling NODAL functions; functions are provided for exchanging data between a data set on the central computers and a NODAL variable, submitting a batch job to the central computers, checking the status of a submitted job, etc. (orig.)

  12. Distributed control network for optogenetic experiments

    Science.gov (United States)

    Kasprowicz, G.; Juszczyk, B.; Mankiewicz, L.

    2014-11-01

    Nowadays optogenetic experiments are designed to examine social behavioural relations in groups of animals. A novel concept for an implantable device with a distributed control network and advanced positioning capabilities is proposed. It is based on wireless energy transfer technology, a micro-power radio interface and advanced signal processing.

  13. 11th International Conference on Distributed Computing and Artificial Intelligence

    CERN Document Server

    Bersini, Hugues; Corchado, Juan; Rodríguez, Sara; Pawlewski, Paweł; Bucciarelli, Edgardo

    2014-01-01

    The 11th International Symposium on Distributed Computing and Artificial Intelligence 2014 (DCAI 2014) is a forum to present applications of innovative techniques for studying and solving complex problems. The exchange of ideas between scientists and technicians from both the academic and industrial sector is essential to facilitate the development of systems that can meet the ever-increasing demands of today's society. The present edition brings together past experience, current work and promising future trends associated with distributed computing, artificial intelligence and their application in order to provide efficient solutions to real problems. This year's technical program presents both high quality and diversity, with contributions from many countries (Algeria, Brazil, China, Croatia, Czech Republic, Denmark, France, Germany, Ireland, Italy, Japan, Malaysia, Mexico, Poland, Portugal, Republic of Korea, Spain, Taiwan, Tunisia, Ukraine, United Kingdom) in well-established and evolving areas of research, representing ...

  14. The BaBar experiment's distributed computing model

    International Nuclear Information System (INIS)

    Boutigny, D.

    2001-01-01

    In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multitier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously increased, and data distribution tools have been designed in order to reach a transfer rate of ∼100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-ROOT format and later in Objectivity format. GRID tools will be used for remote job submission

  15. The BaBar Experiment's Distributed Computing Model

    International Nuclear Information System (INIS)

    Gowdy, Stephen J.

    2002-01-01

    In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multi-tier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously increased, and data distribution tools have been designed in order to reach a transfer rate of ∼100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-ROOT[1] format and later in Objectivity[2] format. GRID tools will be used for remote job submission

  16. SWITCHING POWER FAN CONTROL OF COMPUTER

    Directory of Open Access Journals (Sweden)

    Oleksandr I. Popovskyi

    2010-10-01

    The relevance of the material presented in the article is due to the extensive use of high-performance computers in the creation of modern information systems, including those of the NAPS of Ukraine. Most computers in the NAPS of Ukraine run on Intel Pentium processors at speeds from 600 MHz to 3 GHz and release a lot of heat, which requires the installation of 2-3 additional fans in the system unit. The fans always work at full power, which leads to rapid wear and a high noise level (up to 50 dB). In order to meet ergonomic requirements it is proposed to install in the computer system unit an additional fan control unit, allowing independent control of each fan. The solution has been applied in the creation of information systems for research planning in the National Academy of Pedagogical Sciences of Ukraine on an Internet basis.
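
    The behaviour of such an independent per-fan controller can be sketched in a few lines: each channel maps its own sensor temperature to a duty cycle, so a fan only spins up when its zone is hot. The temperature thresholds and minimum duty below are illustrative assumptions, not values from the article.

        # Map a sensor temperature to a PWM duty cycle in [0.2, 1.0]:
        # quiet minimum airflow when cool, full speed only when hot.
        def fan_duty(temp_c, t_idle=35.0, t_full=70.0):
            if temp_c <= t_idle:
                return 0.2
            if temp_c >= t_full:
                return 1.0
            return 0.2 + 0.8 * (temp_c - t_idle) / (t_full - t_idle)

        # Each fan channel is driven independently from its own sensor.
        for name, temp in {"cpu": 58.0, "case": 41.0, "psu": 36.0}.items():
            print(name, round(fan_duty(temp), 2))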

  17. Advances in Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), held in Zhuhai, China, November 19-20, 2011. Topics covered in this volume include signal and image processing, speech and audio processing, video processing and analysis, artificial intelligence, computing and intelligent systems, machine learning, sensor and neural networks, knowledge discovery and data mining, fuzzy mathematics and applications, knowledge-based systems, hybrid systems modeling and design, risk analysis and management, and system modeling and simulation. We hope that researchers, graduate students and other interested readers benefit scientifically from the proceedings and also find it stimulating in the process.

  18. Automatic control of commercial computer programs

    International Nuclear Information System (INIS)

    Rezvov, B.A.; Artem'ev, A.N.; Maevskij, A.G.; Demkiv, A.A.; Kirillov, B.F.; Belyaev, A.D.; Artem'ev, N.A.

    2010-01-01

    A method for the automatic control of commercial computer programs is presented. A connection was developed between the automation system of the EXAFS spectrometer (managed by a PC under DOS) and the commercial program for CCD detector control (managed by a PC under Windows). The described complex system is used to automate the processing of intermediate amplitude spectra in EXAFS spectrum measurements at the Kurchatov SR source.

  19. Computer networks in future accelerator control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1977-03-01

    Some findings of a study concerning a computer-based control and monitoring system for the proposed ISABELLE Intersecting Storage Accelerator are presented. Requirements for the development and implementation of such a system are discussed. An architecture is proposed in which the system components are partitioned along functional lines. Implementation of some conceptually significant components is reviewed

  20. Software Quality Measurement for Distributed Systems. Volume 3. Distributed Computing Systems: Impact on Software Quality.

    Science.gov (United States)

    1983-07-01

    This report examines the impact of distributed computing systems on software quality. Topics discussed include "C3I Application", "Space Systems Network", "Need for Distributed Database Management", and "Adaptive Routing", as well as data reduction, buffering, encryption, and error detection and correction functions; examples of such data streams include imagery data and video.

  1. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

    This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results on the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer floods and polymer floods. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
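
    The subdomain extension described above is the classic ghost-cell (halo) pattern. A minimal sketch for a 1-D decomposition is shown below, assuming mpi4py is available; it illustrates the idea only and is not UTCHEM's actual code.

        # 1-D domain decomposition with one ghost cell on each side; the
        # exchange lets every rank apply a 3-point stencil to its interior.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        u = np.zeros(10)                  # 8 interior cells + 2 ghost cells
        u[1:-1] = rank                    # sample data owned by this rank

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        # Swap boundary values with both neighbours (deadlock-free).
        comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

        u[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]   # smoothing step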

  2. Distributed user interfaces for clinical ubiquitous computing applications.

    Science.gov (United States)

    Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik

    2005-08-01

    Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel, to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices, such as digital pens, an active desk, and walk-up displays, that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model that allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs--and similar network-based user interfaces--will be a prerequisite of future mobile user interfaces and essential to the development of clinical multi-device environments.

  3. Computationally intensive econometrics using a distributed matrix-programming language.

    Science.gov (United States)

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.
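
    Level (i), parallelization explicit in the code, combines naturally with the paper's insistence on deterministic computing: if every task carries its own fixed seed, the result does not depend on scheduling. A small sketch in Python (standing in for Ox, purely as an illustration):

        # Deterministic parallel Monte Carlo: fixed per-task seeds make the
        # result identical on every run, regardless of worker count.
        import numpy as np
        from multiprocessing import Pool

        def simulate_chunk(args):
            seed, n = args
            rng = np.random.default_rng(seed)
            return rng.normal(0.0, 0.01, size=n).sum()   # toy return draws

        if __name__ == "__main__":
            tasks = [(1000 + i, 250_000) for i in range(8)]
            with Pool(4) as pool:
                partials = pool.map(simulate_chunk, tasks)  # order-preserving
            print(sum(partials))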

  4. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.
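
    The core of the steering loop is easy to mimic: run the expensive simulation offline on a parameter grid, then answer interactive queries from an interpolating surrogate. The sketch below uses a regular grid and scipy for brevity, whereas the paper uses sparse grids; the simulation function is a stand-in.

        # Precompute the expensive solver on a coarse grid, then answer
        # steering queries instantly by interpolation.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def expensive_simulation(p1, p2):
            return np.sin(p1) * np.cos(p2)      # stand-in for the real solver

        p1 = np.linspace(0.0, np.pi, 17)
        p2 = np.linspace(0.0, np.pi, 17)
        P1, P2 = np.meshgrid(p1, p2, indexing="ij")
        table = expensive_simulation(P1, P2)    # offline, possibly in parallel

        surrogate = RegularGridInterpolator((p1, p2), table)

        # Online steering: immediate answers for any parameter configuration.
        print(surrogate([[0.3, 1.2], [1.0, 2.0]]))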

  5. Distributed control and instrumentation systems for future nuclear power plants

    International Nuclear Information System (INIS)

    Yan, G.; L'Archeveque, J.V.R.

    1976-01-01

    The centralized dual-computer system philosophy has evolved as the key concept underlying the highly successful application of direct digital control in CANDU power reactors. After more than a decade, this basic philosophy bears re-examination in the light of advances in system concepts, notably distributed architectures. A number of related experimental programs, all aimed at exploring the prospects of applying distributed systems in Canadian nuclear power plants, are discussed. It was realized from the outset that the successful application of distributed systems depends on the availability of a highly reliable, high-capacity, low-cost communications medium. Accordingly, an experimental facility has been established and experiments have been defined to address such problem areas as interprocess communications, distributed database design and man/machine interfaces. The design of a first application, to be installed at the NRU/NRX research reactors, is progressing well

  6. Probing the structure of complex solids using a distributed computing approach-Applications in zeolite science

    International Nuclear Information System (INIS)

    French, Samuel A.; Coates, Rosie; Lewis, Dewi W.; Catlow, C. Richard A.

    2011-01-01

    We demonstrate the viability of distributed computing techniques employing idle desktop computers in investigating complex structural problems in solids. Through the use of a combined Monte Carlo and energy minimisation method, we show how a large parameter space can be effectively scanned. By controlling the generation and running of different configurations through a database engine, we are able not only to analyse the data 'on the fly' but also to direct the running of jobs and the algorithms for generating further structures. As an exemplar case, we probe the distribution of Al and extra-framework cations in the structure of the zeolite Mordenite. We compare our computed unit cells with experiment and find that whilst there is excellent correlation between computed and experimentally derived unit cell volumes, cation positioning and short-range Al ordering (i.e. near-neighbour environment), there remains some discrepancy in the distribution of Al throughout the framework. We also show that stability-structure correlations only become apparent once a sufficiently large sample is used.
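
    The scan itself has a simple shape: generate candidate configurations, evaluate their energies on whatever processors are idle, and keep the results in a sorted store that can steer further sampling. The sketch below illustrates this with a toy near-neighbour penalty in place of a real force field; all names and numbers are illustrative.

        # Parallel Monte Carlo scan over Al/Si placements with a toy energy.
        import random
        from multiprocessing import Pool

        N_SITES, N_AL = 12, 3

        def energy(config):
            # Toy penalty: discourage Al atoms on neighbouring sites.
            return sum(1.0 for a, b in zip(config, config[1:]) if a == b == "Al")

        def random_config(seed):
            rng = random.Random(seed)
            sites = ["Al"] * N_AL + ["Si"] * (N_SITES - N_AL)
            rng.shuffle(sites)
            return tuple(sites)

        if __name__ == "__main__":
            configs = [random_config(s) for s in range(5000)]
            with Pool() as pool:
                energies = pool.map(energy, configs)
            database = sorted(zip(energies, configs))   # analysis "on the fly"
            print(database[0])                          # most stable found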

  7. CANDU Digital Control Computer upgrade options

    International Nuclear Information System (INIS)

    De Jong, M.S.; De Grosbois, J.; Qian, T.

    1997-01-01

    This paper reviews the evolution of Digital Control Computers (DCC) in CANDU power plants to the present day. Much of this evolution has been driven by changing control and display requirements, as well as by the replacement of obsolete or ageing, less reliable technology with the better equipment now available. Current work at AECL and Canadian utilities to investigate DCC upgrade options, alternatives, and strategies is examined. The dependence of a particular upgrade strategy on the overall plant refurbishment plans is also discussed. Presently, the upgrade options range from replacement of individual obsolete system components, to replacement of the entire DCC hardware without changing the software, to complete replacement of the DCCs with a functionally equivalent system using new control computer equipment and software. Key issues, constraints and objectives associated with these DCC upgrade options are highlighted. (author)

  8. A computer-controlled conformal radiotherapy system. IV: Electronic chart

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; McShan, Daniel L.; Matrone, Gwynne M.; Weaver, Tamar A.; Lewis, James D.; Kessler, Marc L.

    1995-01-01

    Purpose: The design and implementation of a system for electronically tracking relevant plan, prescription, and treatment data for computer-controlled conformal radiation therapy is described. Methods and Materials: The electronic charting system is implemented on a computer cluster coupled by high-speed networks to computer-controlled therapy machines. A methodical approach to the specification and design of an integrated solution has been used in developing the system. The electronic chart system is designed to allow identification and access of patient-specific data including treatment-planning data, treatment prescription information, and charting of doses. An in-house developed database system is used to provide an integrated approach to the database requirements of the design. A hierarchy of databases is used for both centralization and distribution of the treatment data for specific treatment machines. Results: The basic electronic database system has been implemented and has been in use since July 1993. The system has been used to download and manage treatment data on all patients treated on our first fully computer-controlled treatment machine. To date, electronic dose charting functions have not been fully implemented clinically, requiring the continued use of paper charting for dose tracking. Conclusions: The routine clinical application of complex computer-controlled conformal treatment procedures requires the management of large quantities of information for describing and tracking treatments. An integrated and comprehensive approach to this problem has led to a full electronic chart for conformal radiation therapy treatments

  9. Brian Carpenter at the PS control computer

    CERN Multimedia

    vmo; CERN PhotoLab

    1971-01-01

    Brian E. Carpenter has been Group Leader of the Communications Systems group at CERN since 1985, following ten years' experience in software for process control systems at CERN, which was interrupted by three years teaching undergraduate computer science at Massey University in New Zealand. He holds a first degree in physics and a Ph.D. in computer science, and is an M.I.E.E. He is Chair of the Internet Architecture Board and an active participant in the Internet Engineering Task Force.

  10. An ATLAS distributed computing architecture for HL-LHC

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2017-01-01

    The ATLAS collaboration has started a process to understand the computing needs for the High Luminosity LHC era. Based on our best understanding of the computing model input parameters for the HL-LHC data-taking conditions, results indicate the need for a larger amount of computational and storage resources than projected for a constant yearly computing budget in 2026. Filling the gap between the projection and the needs will be one of the challenges in preparation for LHC Run 4. While the gains from improvements in offline software will play a crucial role in this process, a different model for data processing, management, access and bookkeeping should also be envisaged to optimise resource usage. In this contribution we describe a straw man of this model, founded on basic principles such as single event-level granularity for data processing and virtual data. We explain how the current architecture will evolve adiabatically into the future distributed computing system, through the prot...

  11. On the relevancy of efficient, integrated computer and network monitoring in HEP distributed online environment

    International Nuclear Information System (INIS)

    Carvalho, D.; Gavillet, Ph.; Delgado, V.; Javello, J.; Miere, Y.; Ruffinoni, D.; Albert, J.N.; Bellas, N.; Smith, G.

    1996-01-01

    Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Such systems are generically called Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, tasks are more and more often implemented as client-server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments in view, such as those at the LHC, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose multi-layer system. (author)

  12. Computer controls for the WITCH experiment

    Czech Academy of Sciences Publication Activity Database

    Tandecki, M.; Beck, M.; Beck, D.; Brand, H.; Breitenfeldt, M.; De Leebeeck, V.; Friedag, P.; Herlert, A.; Kozlov, V.; Mader, J.; Roccia, S.; Soti, G.; Traykov, E.; Van Gorp, S.; Wauters, F.; Weinheimer, C.; Zákoucký, Dalibor; Severijns, N.

    2011-01-01

    Roč. 629, č. 1 (2011), s. 369-405 ISSN 0168-9002 R&D Projects: GA MŠk LA08015 Institutional research plan: CEZ:AV0Z10480505; CEZ:AV0Z10100523 Keywords : LabVIEW * Control system * Distributed programming Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.207, year: 2011

  13. Integration of distributed plant process computer systems to nuclear power generation facilities

    International Nuclear Information System (INIS)

    Bogard, T.; Finlay, K.

    1996-01-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and the associated objectives of improving plant operability, increasing plant information access, improving man-machine interface characteristics, and reducing operation and maintenance costs. This paper describes a few recently completed and on-going replacement projects, with emphasis upon the application of integrated distributed plant process computer systems. By presenting a few recent projects, the variations of distributed systems design show how various configurations can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer and plant process instrumentation and control are evident from the variations of design features

  14. The Ganil computer control system renewal

    International Nuclear Information System (INIS)

    David, L.; Lecorche, E.; Luong, T.T.; Ulrich, M.

    1990-01-01

    Since 1982 the GANIL heavy ion accelerator has been under the control of MITRA 16-bit minicomputers, programmable logic controllers and microprocessor-based CAMAC controllers, structured into a partially centralized system. This control system has to be renewed to meet the increasing demands of accelerator operation, which aims to provide higher quality ion beams under more reliable conditions. This paper gives a brief description of the existing control system and then discusses the main issues of the design and implementation of the future control system: distributed powerful processors federated through Ethernet with flexible network-wide database access, the VME standard and front-end microprocessors, enhanced color graphic tools and a workstation-based operator interface

  15. Distributed traffic signal control using fuzzy logic

    Science.gov (United States)

    Chiu, Stephen

    1992-01-01

    We present a distributed approach to traffic signal control, where the signal timing parameters at a given intersection are adjusted as functions of the local traffic condition and of the signal timing parameters at adjacent intersections. Thus, the signal timing parameters evolve dynamically using only local information to improve traffic flow. This distributed approach provides for a fault-tolerant, highly responsive traffic management system. The signal timing at an intersection is defined by three parameters: cycle time, phase split, and offset. We use fuzzy decision rules to adjust these three parameters based only on local information. The amount of change in the timing parameters during each cycle is limited to a small fraction of the current parameters to ensure smooth transition. We show the effectiveness of this method through simulation of the traffic flow in a network of controlled intersections.
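
    One of the fuzzy rules described above can be sketched concretely for the cycle-time parameter: a triangular membership on the locally measured queue length drives a small, bounded change, so the timing evolves smoothly from local information only. The rule shapes and the 5% bound are illustrative assumptions.

        # Fuzzy adjustment of one timing parameter (cycle time) from local data.
        def tri(x, a, b, c):
            # Triangular membership function peaking at b.
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def adjust_cycle(cycle_s, queue):
            heavy = tri(queue, 10, 25, 40)     # degree traffic is "heavy"
            light = tri(queue, -1, 0, 12)      # degree traffic is "light"
            # Defuzzify: lengthen when heavy, shorten when light, with the
            # change capped at a small fraction of the current value.
            delta = 0.05 * cycle_s * (heavy - light)
            return cycle_s + delta

        cycle = 60.0
        for q in [3, 18, 35]:
            cycle = adjust_cycle(cycle, q)
            print(round(cycle, 1))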

  16. Method and system for redundancy management of distributed and recoverable digital control system

    Science.gov (United States)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2012-01-01

    A method and system for redundancy management is provided for a distributed and recoverable digital control system. The method uses unique redundancy management techniques to achieve recovery and restoration of redundant elements to full operation in an asynchronous environment. The system includes a first computing unit comprising a pair of redundant computational lanes for generating redundant control commands. One or more internal monitors detect data errors in the control commands, and provide a recovery trigger to the first computing unit. A second redundant computing unit provides the same features as the first computing unit. A first actuator control unit is configured to provide blending and monitoring of the control commands from the first and second computing units, and to provide a recovery trigger to each of the first and second computing units. A second actuator control unit provides the same features as the first actuator control unit.
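
    A toy version of the actuator control unit's blending and monitoring role, under illustrative assumptions (two duplex computing units and a simple agreement tolerance): commands from lanes that agree are blended, while a unit whose lanes disagree receives a recovery trigger and the output stays valid.

        # Blend redundant commands and raise recovery triggers on lane mismatch.
        def blend_and_monitor(cmd_a_lanes, cmd_b_lanes, tol=0.05):
            triggers = {}
            candidates = []
            for name, (lane1, lane2) in {"A": cmd_a_lanes, "B": cmd_b_lanes}.items():
                if abs(lane1 - lane2) > tol:
                    triggers[name] = True      # lanes disagree: request recovery
                else:
                    candidates.append((lane1 + lane2) / 2.0)
            blended = sum(candidates) / len(candidates) if candidates else None
            return blended, triggers

        print(blend_and_monitor((1.00, 1.01), (1.00, 1.30)))
        # -> (1.005, {'B': True}): unit B is told to recover, output stays valid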

  17. Fast Performance Computing Model for Smart Distributed Power Systems

    Directory of Open Access Journals (Sweden)

    Umair Younas

    2017-06-01

    Plug-in Electric Vehicles (PEVs) are becoming a more prominent solution compared to fossil-fuel car technology due to their significant role in Greenhouse Gas (GHG) reduction, flexible storage, and ancillary service provision as a Distributed Generation (DG) resource in Vehicle-to-Grid (V2G) regulation mode. However, large-scale penetration of PEVs and the growing demand of energy-intensive Data Centers (DCs) bring undesirably high peaks in electricity demand, which impose supply-demand imbalance and threaten the reliability of the wholesale and retail power market. In order to overcome the aforementioned challenges, the proposed research considers a smart Distributed Power System (DPS) comprising conventional sources, renewable energy, V2G regulation, and flexible storage energy resources. Moreover, price- and incentive-based Demand Response (DR) programs are implemented to sustain the balance between net demand and available generating resources in the DPS. In addition, we adopt a novel strategy to implement the computationally intensive jobs of the proposed DPS model, including incoming load profiles, V2G regulation, battery State of Charge (SOC) indication, and fast computation in a decision-based automated DR algorithm, using the fast performance computing resources of DCs. In response, the DPS provides economical and stable power to the DCs under strict power-quality constraints. Finally, the improved results are verified using a case study of ISO California integrated with hybrid generation.

  18. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    Science.gov (United States)

    Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration

    2014-06-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements and the re-usability of the visualization bits across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with the separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well-defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.

  19. Distributed medium access control in wireless networks

    CERN Document Server

    Wang, Ping

    2013-01-01

    This brief investigates distributed medium access control (MAC) with QoS provisioning for both single- and multi-hop wireless networks including wireless local area networks (WLANs), wireless ad hoc networks, and wireless mesh networks. For WLANs, an efficient MAC scheme and a call admission control algorithm are presented to provide guaranteed QoS for voice traffic and, at the same time, increase the voice capacity significantly compared with the current WLAN standard. In addition, a novel token-based scheduling scheme is proposed to provide great flexibility and facility to the network servi

  20. Distributed control in the electricity infrastructure

    International Nuclear Information System (INIS)

    Kok, J.K.; Warmer, C.; Kamphuis, I.G.; Mellstrand, P.; Gustavsson, R.

    2006-01-01

    Different driving forces push the electricity production towards decentralization. As a result, the current electricity infrastructure is expected to evolve into a network of networks, in which all system parts communicate with each other and influence each other. Multiagent systems and electronic markets form an appropriate technology needed for control and coordination tasks in the future electricity network. We present the PowerMatcher, a market-based control concept for supply demand matching (SDM) in electricity networks. In a simulation study we show the ability of this approach to raise the simultaneousness of electricity production and consumption within (local) control clusters. This control concept can be applied in different business cases like reduction of imbalance costs in commercial portfolios or virtual power plant operation of distributed generators. Two PowerMatcher-based field test configurations are described, one currently in operation, one currently under construction
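
    The heart of such market-based supply-demand matching can be sketched in a few lines: each device agent submits a demand function of price (negative for generators), and the auctioneer searches for the price at which aggregate demand is zero. The bid curves and bisection search below are illustrative assumptions, not the PowerMatcher implementation.

        def heater_bid(price):     # flexible load: buys less as price rises
            return max(0.0, 2.0 - 0.02 * price)

        def chp_bid(price):        # distributed generator: negative demand
            return -min(3.0, 0.03 * price)

        agents = [heater_bid, chp_bid]

        def clearing_price(agents, lo=0.0, hi=200.0, iters=50):
            # Bisection on aggregate demand, non-increasing in price.
            for _ in range(iters):
                mid = (lo + hi) / 2.0
                if sum(bid(mid) for bid in agents) > 0.0:
                    lo = mid           # excess demand: raise the price
                else:
                    hi = mid
            return (lo + hi) / 2.0

        p = clearing_price(agents)     # -> 40.0: supply and demand match at 1.2
        print(round(p, 2), [round(bid(p), 3) for bid in agents])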

  1. Integrated Computer Controlled Glow Discharge Tube

    Science.gov (United States)

    Kaiser, Erik; Post-Zwicker, Andrew

    2002-11-01

    An "Interactive Plasma Display" was created for the Princeton Plasma Physics Laboratory to demonstrate the characteristics of plasma to various science education outreach programs. From high school students and teachers, to undergraduate students and visitors to the lab, the plasma device will be a key component in advancing the public's basic knowledge of plasma physics. The device is fully computer controlled using LabVIEW, a touchscreen Graphical User Interface [GUI], and a GPIB interface. Utilizing a feedback loop, the display is fully autonomous in controlling pressure, as well as in monitoring the safety aspects of the apparatus. With a digital convectron gauge continuously monitoring pressure, the computer interface analyzes the input signals, while making changes to a digital flow controller. This function works independently of the GUI, allowing the user to simply input and receive a desired pressure; quickly, easily, and intuitively. The discharge tube is a 36" x 4"id glass cylinder with 3" side port. A 3000 volt, 10mA power supply, is used to breakdown the plasma. A 300 turn solenoid was created to demonstrate the magnetic pinching of a plasma. All primary functions of the device are controlled through the GUI digital controllers. This configuration allows for operators to safely control the pressure (100mTorr-1Torr), magnetic field (0-90Gauss, 7amps, 10volts), and finally, the voltage applied across the electrodes (0-3000v, 10mA).

  2. Intelligent distributed control for nuclear power plants

    International Nuclear Information System (INIS)

    Klevans, E.H.

    1992-01-01

    This project was initiated in September 1989 as a three-year project to develop and demonstrate Intelligent Distributed Control (IDC) for nuclear power plants. The body of this Third Annual Technical Progress Report summarizes the period from September 1991 to October 1992. There were two primary goals of this research project. The first goal was to combine diagnostics and control to achieve a highly automated power plant, as described by M.A. Schultz. His philosophy is to improve public perception of the safety of nuclear power plants by incorporating a high degree of automation, where a greatly simplified operator control console minimizes the possibility of human error in power plant operations. To achieve this goal, a hierarchically distributed control system with automated responses to plant upset conditions was pursued in this research. The second goal was to apply this research to develop a prototype demonstration on an actual power plant system, the EBR-II steam plant. Emphasized in this Third Annual Technical Progress Report is the continuing development of the in-plant intelligent control demonstration for the final project milestone, including simulation validation and the initial approach to experiment formulation

  3. Fault tolerant computer control for a Maglev transportation system

    Science.gov (United States)

    Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George

    1994-01-01

    Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air and with less operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for the Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements on the maglev. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.

  4. Tool set for distributed real-time machine control

    Science.gov (United States)

    Carrott, Andrew J.; Wright, Christopher D.; West, Andrew A.; Harrison, Robert; Weston, Richard H.

    1997-01-01

    Demands for increased control capabilities require next generation manufacturing machines to comprise intelligent building elements, physically located at the point where the control functionality is required. Networks of modular intelligent controllers are increasingly designed into manufacturing machines and usable standards are slowly emerging. To implement a control system using off-the-shelf intelligent devices from multi-vendor sources requires a number of well defined activities, including (a) the specification and selection of interoperable control system components, (b) device independent application programming and (c) device configuration, management, monitoring and control. This paper briefly discusses the support for the above machine lifecycle activities through the development of an integrated computing environment populated with an extendable software toolset. The toolset supports machine builder activities such as initial control logic specification, logic analysis, machine modeling, mechanical verification, application programming, automatic code generation, simulation/test, version control, distributed run-time support and documentation. The environment itself consists of system management tools and a distributed object-oriented database which provides storage for the outputs from machine lifecycle activities and specific target control solutions.

  5. Future Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), held in Zhuhai, China, November 19-20, 2011. Topics covered in this volume include wireless communications, advances in wireless video, wireless sensor networking, security in wireless networks, network measurement and management, hybrid and discrete-event systems, internet analytics and automation, robotic systems and applications, reconfigurable automation systems, and machine vision in automation. We hope that researchers, graduate students and other interested readers benefit scientifically from the proceedings and also find it stimulating in the process.

  6. Increasing efficiency of job execution with resource co-allocation in distributed computer systems

    OpenAIRE

    Cankar, Matija

    2014-01-01

    The field of distributed computer systems, while not new in computer science, is still the subject of considerable interest in both industry and academia. More powerful computers, faster and more ubiquitous networks, and complex distributed applications are accelerating the growth of distributed computing. Large numbers of computers interconnected in a single network provide additional computing power to users whenever required. Such systems are, however, expensive and complex to manage, which ca...

  7. A Computer-Controlled Laser Bore Scanner

    Science.gov (United States)

    Cheng, Charles C.

    1980-08-01

    This paper describes the design and engineering of a laser scanning system for production applications. The laser scanning techniques, the timing control, the logic design of the pattern recognition subsystem, the digital computer servo control for the loading and unloading of parts, and the laser probe rotation and its synchronization are discussed. The laser inspection machine is designed to automatically inspect the surface of precision-bored holes, such as those in automobile master cylinders, without contacting the machined surface. Although the controls are relatively sophisticated, operation of the laser inspection machine is simple. A laser light beam from a commercially available gas laser, directed through a probe, scans the entire surface of the bore. Reflected light, picked up through optics by photoelectric sensors, generates signals that are fed to a minicomputer for processing. A pattern recognition program in the computer determines acceptance or rejection of the part being inspected. The system's acceptance specifications are adjustable and are set to the user's established tolerances. The computer-controlled laser system is capable of resolving surface finishes from 10 to 75 rms, and voids or flaws from 0.0005 to 0.020 inch. Following a successful demonstration with an engineering prototype, the described laser machine has proved its capability to consistently ensure high-quality master brake cylinders. It thus provides a safety improvement for the automotive braking system: flawless, smooth cylinder bores eliminate premature wearing of the rubber seals, resulting in a longer-lasting master brake cylinder and a safer, more reliable automobile. The results obtained from use of this system, which has been in operation for about a year replacing a tedious manual operation on one of the high-volume lines at the Bendix Hydraulics Division, have been very satisfactory.

  8. Ride control of surface effect ships using distributed control

    Directory of Open Access Journals (Sweden)

    Asgeir J. Sørensen

    1994-04-01

    A ride control system for active damping of heave and pitch accelerations of Surface Effect Ships (SES) is presented. It is demonstrated that distributed effects due to a spatially varying pressure in the air cushion result in significant vertical vibrations in low and moderate sea states. In order to achieve high-quality human comfort and crew workability it is necessary to reduce these vibrations using a control system which accounts for distributed effects due to spatial pressure variations in the air cushion. A mathematical model of the process is presented, and collocated sensor and actuator pairs are used. The process stability is ensured using a controller with appropriate passivity properties. Sensor and actuator location is also discussed. The performance of the ride control system is shown by power spectra of the vertical accelerations obtained from full scale experiments with a 35 m SES.

  9. Computational scheme for transient temperature distribution in PWR vessel wall

    International Nuclear Information System (INIS)

    Dedovic, S.; Ristic, P.

    1980-01-01

    Computer code TEMPNES is part of a joint effort made in Gosa Industries to achieve the techniques for structural analysis of heavy pressure vessels. The analysis of transient heat conduction problems is based on finite element discretization of structures, a non-linear transient matrix formulation and the step-by-step time integration scheme developed by Wilson. Convection boundary conditions and the effect of heat generation due to radioactive radiation are both considered. The computation of transient temperature distributions in the reactor vessel wall when the water temperature suddenly drops as a consequence of reactor cooling pump failure is presented. The vessel is treated as an axisymmetric body of revolution. The program has two time-increment options: (a) a fixed predetermined increment and (b) an automatically optimized time increment for each step, dependent on the rate of change of the nodal temperatures. (author)

  10. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communication. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
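
    The flavour of an algebraic, embarrassingly parallel grid scheme can be shown with a simple homotopy between two boundary curves: every grid point is an independent blend of the boundaries, so blocks of points can be computed on separate processors with no communication. This linear blend is an illustration only, not the paper's formulation.

        # Algebraic grid by a homotopy H(t) = (1 - t)*inner + t*outer.
        import numpy as np

        n_s, n_t = 33, 17                     # points along and across the grid
        s = np.linspace(0.0, 2 * np.pi, n_s)
        t = np.linspace(0.0, 1.0, n_t)[:, None]

        # Inner boundary: unit circle; outer boundary: ellipse (stand-ins).
        inner = np.stack([np.cos(s), np.sin(s)])
        outer = np.stack([3.0 * np.cos(s), 2.0 * np.sin(s)])

        # Each processor could own a contiguous block of s-indices; every
        # point depends only on the two boundary curves.
        grid = (1.0 - t) * inner[:, None, :] + t * outer[:, None, :]
        print(grid.shape)                     # (2, n_t, n_s): x/y grid points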

  11. Computer control of rf at SLAC

    International Nuclear Information System (INIS)

    Schwarz, H.D.

    1985-03-01

    The Stanford Linear Accelerator is presently being upgraded for the SLAC Linear Collider project. The energy is to be increased from approximately 31 GeV to 50 GeV. Two electron beams and one positron beam are to be accelerated, with high demands on the quality of the beams. The beam specifications are shown. To meet these specifications, all parameters influencing the beams have to be under tight control and continuous surveillance. This task is accomplished by a new computer system implemented at SLAC which has, among many other functions, control over the rf accelerating fields. 13 refs., 8 figs., 2 tabs

  12. Universal dephasing control during quantum computation

    International Nuclear Information System (INIS)

    Gordon, Goren; Kurizki, Gershon

    2007-01-01

    Dephasing is a ubiquitous phenomenon that leads to the loss of coherence in quantum systems and the corruption of quantum information. We present a universal dynamical control approach to combat dephasing during all stages of quantum computation, namely storage and single- and two-qubit operations. We show that (a) tailoring multifrequency gate pulses to the dephasing dynamics can increase fidelity; (b) cross-dephasing, introduced by entanglement, can be eliminated by appropriate control fields; and (c) counterintuitively, and contrary to previous schemes, one can increase the gate duration while simultaneously increasing the total gate fidelity

  13. Interaction and control in wearable computing

    International Nuclear Information System (INIS)

    Strand, Ole Morten; Johansen, Paal; Droeivoldsmo, Asgeir; Reigstad, Magnus; Olsen, Asle; Helgar, Stein

    2004-03-01

    This report presents the status of Halden Virtual Reality Centre (HVRC) work with technological solutions for wearable computing to support operations where interaction and control of wearable information and communication systems for plant floor personnel are of importance. The report describes a framework and system prototype developed for testing technology, usability and applicability of eye movements and speech for controlling wearable equipment while having both hands free. Potentially interesting areas for further development are discussed with regard to the effect they have on the work situation for plant floor personnel using computerised wearable systems. (Author)

  14. Picture processing computer to control movement by computer provided vision

    Energy Technology Data Exchange (ETDEWEB)

    Graefe, V

    1983-01-01

    The author introduces a multiprocessor system which has been specially developed to enable mechanical devices to interpret pictures presented in real time. The separate processors within this system operate simultaneously and independently. By means of freely movable windows, the processors can concentrate on those parts of the picture that are relevant to the control problem. If a machine is to make a correct response to its observation of a picture of moving objects, it must be able to follow the picture sequence, step by step, in real time. As the usual serially operating processors are too slow for such a task, the author describes three models of a special picture-processing computer which it has been necessary to develop. 3 references.

  15. Intelligent distributed control for nuclear power plants

    International Nuclear Information System (INIS)

    Klevans, E.H.; Edwards, R.M.; Ray, A.; Lee, K.Y.; Garcia, H.E.; Chavez, C.M.; Turso, J.A.; BenAbdennour, A.

    1991-01-01

    In September of 1989 work began on the DOE University Program grant DE-FG07-89ER12889. The grant provides support for a three-year project to develop and demonstrate Intelligent Distributed Control (IDC) for Nuclear Power Plants. The body of this Second Annual Technical Progress Report covers the period from September 1990 to September 1991. It summarizes the second year's accomplishments, while the appendices provide detailed information presented at conference meetings. There are two primary goals of this research. The first is to combine diagnostics and control to achieve a highly automated power plant as described by M.A. Schultz, a project consultant during the first year of the project. This philosophy, as presented in the first annual technical progress report, is to improve public perception of the safety of nuclear power plants by incorporating a high degree of automation, where a greatly simplified operator control console minimizes the possibility of human error in power plant operations. A hierarchically distributed control system with automated responses to plant upset conditions is the focus of our research to achieve this goal. The second goal is to apply this research to develop a prototype demonstration on an actual power plant system, the EBR-II steam plant.

  16. Storm blueprints patterns for distributed real-time computation

    CERN Document Server

    Goetz, P Taylor

    2014-01-01

    A blueprints book with 10 different projects built in 10 different chapters which demonstrate the various use cases of Storm for both beginner and intermediate users, grounded in real-world example applications. Although the book focuses primarily on Java development with Storm, the patterns are more broadly applicable and the tips, techniques, and approaches described in the book apply to architects, developers, and operations. Additionally, the book should provoke and inspire applications of distributed computing to other industries and domains. Hadoop enthusiasts will also find this book a go

  17. Job monitoring on DIRAC for Belle II distributed computing

    Science.gov (United States)

    Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo

    2015-12-01

    We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, where information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experiences, then visualized. As a result, we are able to effectively detect issues. Finally, we discuss the future development for automating log analysis, notification of issues, and disabling problematic sites.

  18. Enabling Computational Dynamics in Distributed Computing Environments Using a Heterogeneous Computing Template

    Science.gov (United States)

    2011-08-09

    heterogeneous computing concept advertised recently as the paradigm capable of delivering exascale flop rates by the end of the decade. In this framework...

  19. A High-Availability, Distributed Hardware Control System Using Java

    Science.gov (United States)

    Niessner, Albert F.

    2011-01-01

    Two independent coronagraph experiments that require 24/7 availability, with different optical layouts and different motion control requirements, are commanded and controlled with the same Java software system executing on many geographically scattered computer systems interconnected via TCP/IP. High availability of a distributed system requires that the computers have a robust communication messaging system, making the mix of TCP/IP (a robust transport) and XML (a robust message) a natural choice. XML also adds configuration flexibility. Java then adds object-oriented paradigms, exception handling, heavily tested libraries, and many third-party tools for implementation robustness. The result is a software system that provides users 24/7 access to two diverse experiments, with XML files defining the differences.

  20. Digital computer control of a research nuclear reactor

    International Nuclear Information System (INIS)

    Crawford, Kevan

    1986-01-01

    Currently, the use of digital computers in energy producing systems has been limited to data acquisition functions. These computers have greatly reduced human involvement in the moment-to-moment decision process and the crisis decision process, thereby improving the safety of the dynamic energy producing systems. However, in addition to data acquisition, control of energy producing systems also includes data comparison, decision making, and control actions. The majority of the latter functions are accomplished through the use of analog computers in a distributed configuration. The lack of cooperation, and hence inefficiency, in distributed control, and the extent of human interaction in critical phases of control, have provided the incentive to improve the latter three functions of energy systems control. Properly applied, centralized control by digital computers can increase efficiency by making the system react as a single unit and by implementing efficient power changes to match demand. Additionally, safety will be improved by further limiting human involvement to action only in the case of a failure of the centralized control system. This paper presents a hardware and software design for the centralized control of a research nuclear reactor by a digital computer. Current nuclear reactor control philosophies, which include redundancy, inherent safety in failure, and conservative yet operational scram initiation, were used as the bases of the design. The control philosophies were applied to the power monitoring system, the fuel temperature monitoring system, the area radiation monitoring system, and the overall system interaction. Unlike the single-function analog computers that are currently used to control research and commercial reactors, this system will be driven by a multifunction digital computer. Specifically, the system will perform control rod movements to conform with operator requests, automatically log the required physical parameters during reactor

  1. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption, on the other hand from natural variations in power production e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology...

  2. Converting dose distributions into tumour control probability

    International Nuclear Information System (INIS)

    Nahum, A.E.

    1996-01-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP) and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter a through s_a can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and s_a. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP are shown as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs
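
    The core of such a model can be sketched as a Poisson TCP with linear-quadratic cell kill, averaged over a Gaussian inter-patient spread of the radiosensitivity parameter; the population averaging is what flattens an otherwise step-like dose-response curve to a clinically realistic slope. Every parameter value below is illustrative, not taken from the paper.

        import numpy as np

        def tcp_population(dose, n_clonogens=1e7, alpha_mean=0.3, sigma_alpha=0.06,
                           alpha_beta=10.0, dose_per_fraction=2.0, n_patients=5000):
            """Poisson TCP = exp(-N0 * SF(D)) averaged over a Gaussian spread of
            radiosensitivity alpha; SF follows the linear-quadratic model."""
            rng = np.random.default_rng(1)
            alpha = np.clip(rng.normal(alpha_mean, sigma_alpha, n_patients), 1e-3, None)
            beta = alpha / alpha_beta
            n_frac = dose / dose_per_fraction
            log_sf = -n_frac * (alpha * dose_per_fraction + beta * dose_per_fraction**2)
            tcp = np.exp(-n_clonogens * np.exp(log_sf))
            return tcp.mean()

        for d in (50.0, 60.0, 70.0, 80.0):
            print(f"{d:5.1f} Gy -> TCP = {tcp_population(d):.3f}")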

  3. Converting dose distributions into tumour control probability

    Energy Technology Data Exchange (ETDEWEB)

    Nahum, A E [The Royal Marsden Hospital, London (United Kingdom). Joint Dept. of Physics

    1996-08-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP) and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter a through s{sub a} can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and s{sub a}. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP are shown as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs.

  4. DIRAC - Distributed Infrastructure with Remote Agent Control

    CERN Document Server

    Tsaregorodtsev, A; Closier, J; Frank, M; Gaspar, C; van Herwijnen, E; Loverre, F; Ponce, S; Graciani Diaz, R.; Galli, D; Marconi, U; Vagnoni, V; Brook, N; Buckley, A; Harrison, K; Schmelling, M; Egede, U; Bogdanchikov, A; Korolko, I; Washbrook, A; Palacios, J P; Klous, S; Saborido, J J; Khan, A; Pickford, A; Soroko, A; Romanovski, V; Patrick, G N; Kuznetsov, G; Gandelman, M

    2003-01-01

    This paper describes DIRAC, the LHCb Monte Carlo production system. DIRAC has a client/server architecture based on: Compute elements distributed among the collaborating institutes; Databases for production management, bookkeeping (the metadata catalogue) and software configuration; Monitoring and cataloguing services for updating and accessing the databases. Locally installed software agents implemented in Python monitor the local batch queue, interrogate the production database for any outstanding production requests using the XML-RPC protocol and initiate the job submission. The agent checks and, if necessary, installs any required software automatically. After the job has processed the events, the agent transfers the output data and updates the metadata catalogue. DIRAC has been successfully installed at 18 collaborating institutes, including the DataGRID, and has been used in recent Physics Data Challenges. In the near to medium term future we must use a mixed environment with different types of grid mid...
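
    The agent side of such an architecture reduces to a simple polling loop. The sketch below assumes a hypothetical XML-RPC endpoint and method names (the real production database exposes its own interface); only the overall pattern of polling, submitting to the local batch queue and reporting back follows the description above.

        import time
        import xmlrpc.client

        # Hypothetical endpoint and method names, invented for illustration.
        PRODUCTION_DB = "https://lhcb-prod.example.org/xmlrpc"

        def agent_cycle(submit_job, poll_interval=300):
            """Minimal agent loop: interrogate the production database for
            outstanding requests over XML-RPC and hand them to the local
            batch system (submit_job might wrap qsub or sbatch)."""
            proxy = xmlrpc.client.ServerProxy(PRODUCTION_DB)
            while True:
                for request in proxy.getOutstandingRequests("MyInstitute"):
                    submit_job(request)                  # local batch submission
                    proxy.markSubmitted(request["id"])   # update the bookkeeping
                time.sleep(poll_interval)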

  5. Distributed process control system for remote control and monitoring of the TFTR tritium systems

    International Nuclear Information System (INIS)

    Schobert, G.; Arnold, N.; Bashore, D.; Mika, R.; Oliaro, G.

    1989-01-01

    This paper reviews the progress made in the application of a commercially available distributed process control system to support the requirements established for the Tritium Remote Control And Monitoring System (TRECAMS) of the Tokamak Fusion Test Reactor (TFTR). The system that will be discussed was purchased from Texas Instruments (TI) Automation Controls Division, previously marketed by Rexnord Automation. It consists of three fully redundant distributed process controllers interfaced to over 1800 analog and digital I/O points. The operator consoles located throughout the facility are supported by four Digital Equipment Corporation (DEC) PDP-11/73 computers. The PDP-11/73s and the three process controllers communicate over a fully redundant one-megabaud fiber optic network. All system functionality is based on a set of completely integrated databases loaded to the process controllers and the PDP-11/73s. (author). 2 refs.; 2 figs

  6. Intelligent distributed control for nuclear power plants

    International Nuclear Information System (INIS)

    Klevans, E.H.

    1993-01-01

    This project was initiated in September 1989 as a three-year project to develop and demonstrate Intelligent Distributed Control (IDC) for Nuclear Power Plants. There were two primary goals of this research project. The first goal was to combine diagnostics and control to achieve a highly automated power plant as described by M.A. Schultz. The second goal was to apply this research to develop a prototype demonstration on an actual power plant system, the EBR-II steam plant. Described in this Final (Third Annual) Technical Progress Report is the accomplishment of the project's final milestone, an in-plant intelligent control experiment conducted on April 1, 1993. The development of the experiment included: simulation validation, experiment formulation and final programming, procedure development and approval, and experimental results. Other third-year developments summarized in this report are: (1) a theoretical foundation for Reconfigurable Hybrid Supervisory Control, (2) a steam plant diagnostic system, (3) control console design tools and (4) other advanced and intelligent control

  7. Flexible distributed architecture for semiconductor process control and experimentation

    Science.gov (United States)

    Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.

    1997-01-01

    Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: specific implementation of any one task does not restrict the implementation of another. The low-level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
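
    The "predefined set of TCP/IP socket based messages" pattern can be sketched as a minimal length-prefixed exchange; the framing, message type and host names below are invented for illustration and are not the MIT protocol.

        import json
        import socket

        def _recv_exact(sock, n):
            """Read exactly n bytes from the socket."""
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed connection")
                buf += chunk
            return buf

        def send_message(host, port, msg_type, payload):
            """Send one length-prefixed JSON message and return the decoded reply.
            Framing, message types and host names are illustrative only."""
            with socket.create_connection((host, port), timeout=5.0) as sock:
                raw = json.dumps({"type": msg_type, "payload": payload}).encode()
                sock.sendall(len(raw).to_bytes(4, "big") + raw)
                length = int.from_bytes(_recv_exact(sock, 4), "big")
                return json.loads(_recv_exact(sock, length))

        # e.g. a cell controller querying a hypothetical equipment controller:
        # reply = send_message("cell-ctrl.example.edu", 5000, "GET_STATUS", {"tool": "AME5000"})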

  8. Manufacturing and application of micro computer for control

    International Nuclear Information System (INIS)

    Park, Seung Man; Heo, Gyeong; Yun, Jun Young

    1990-05-01

    This book deals with machine code and assembly programming for microcomputers. It is composed of 20 chapters, covering: the microcomputer system, practice with a storage cell, manufacturing of a microcomputer (parts 1 and 2), manufacturing of the microcomputer AID-80A, writing machine language, interfaces such as the Z80-PIO and 8255A (PPI), counter and timer interfaces, exercises with basic commands, arithmetic operations, array operations, indicator control, music playing, detection of PIO input, control of LEDs via PIO, PIO modes, CTC control by microcomputer, SIO control by microcomputer, and applications of the microcomputer.

  9. Declarative flow control for distributed instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Parvin, Bahram; Taylor, John; Fontenay, Gerald; Callahan, Daniel

    2001-06-01

    We have developed a 'microscopy channel' to advertise a unique set of on-line scientific instruments and to let users join a particular session, perform an experiment, collaborate with other users, and collect data for further analysis. The channel is a collaborative problem solving environment (CPSE) that allows for both synchronous and asynchronous collaboration, as well as flow control for enhanced scalability. The flow control is a declarative feature that enhances software functionality at the experimental scale. Our testbed includes several unique electron and optical microscopes with applications ranging from material science to cell biology. We have built a system that leverages current commercial CORBA services, Web servers, and flow control specifications to meet diverse requirements for microscopy and experimental protocols. In this context, we have defined and enhanced Instrument Services (IS), Exchange Services (ES), Computational Services (CS), and Declarative Services (DS) that sit on top of CORBA and its enabling services (naming, trading, security, and notification). IS provides a layer of abstraction for controlling any type of microscope. ES provides a common set of utilities for information management and transaction. CS provides the analytical capabilities needed for online microscopy. DS provides mechanisms for flow control for improving the dynamic behavior of the system.

  10. Distributed Model Predictive Control for Active Power Control of Wind Farm

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Rasmussen, Claus Nygaard

    2014-01-01

    This paper presents the active power control of a wind farm using the Distributed Model Predictive Controller (D-MPC) via dual decomposition. Different from the conventional centralized wind farm control, multiple objectives such as power reference tracking performance and wind turbine load can be considered to achieve a trade-off between them. Additionally, D-MPC is based on communication among the subsystems. Through the interaction among the neighboring subsystems, the global optimization could be achieved, which significantly reduces the computation burden. It is suitable for the modern large-scale wind farm control.
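
    The dual-decomposition mechanism can be sketched with quadratic local costs: each turbine independently minimizes its own deviation from a load-friendly set point plus a price term, and the price is updated by a subgradient step until the set points sum to the farm reference. All numbers and weights are invented for illustration.

        import numpy as np

        # Each turbine i minimizes a local quadratic cost w_i * (p_i - p_des_i)^2
        # (deviation from its load-friendly set point) while the farm must track
        # a total reference sum_i p_i = P_ref. Dual decomposition relaxes the
        # coupling constraint with a price lam, updated by a subgradient step.
        p_des = np.array([1.8, 2.2, 1.5, 2.0])   # MW, preferred set points (invented)
        w = np.array([1.0, 0.5, 2.0, 1.0])       # load-sensitivity weights (invented)
        P_ref = 6.0                              # MW, farm-level power reference

        lam = 0.0
        for _ in range(200):
            # local, independent solutions: argmin_p w_i*(p - p_des_i)^2 + lam*p
            p = p_des - lam / (2.0 * w)
            lam += 0.3 * (p.sum() - P_ref)       # subgradient ascent on the dual
        print(p, p.sum())                        # local set points summing to ~6.0 MW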

  11. Aircraft Interior Noise Control Using Distributed Piezoelectric Actuators

    Science.gov (United States)

    Sun, Jian Q.

    1996-01-01

    Developing a control system that can reduce the noise and structural vibration at the same time is an important task. This talk presents one possible technical approach for accomplishing this task. The target application of the research is for aircraft interior noise control. The emphasis of the present approach is not on control strategies, but rather on the design of actuators for the control system. In the talk, a theory of distributed piezoelectric actuators is introduced. A uniform cylindrical shell is taken as a simplified model of fuselage structures to illustrate the effectiveness of the design theory. The actuators developed are such that they can reduce the tonal structural vibration and interior noise in a wide range of frequencies. Extensive computer simulations have been done to study various aspects of the design theory. Experiments have also been conducted and the test results strongly support the theoretical development.

  12. KeyWare: an open wireless distributed computing environment

    Science.gov (United States)

    Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir

    1995-12-01

    Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist in LAN-based applications. A wireless distributed computing environment (KeyWare™) based on intelligent agents within a multiple-client multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline facilitates seamless extension of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services supplemented by tool-sets matched to supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.

  13. DISTRIBUTED GENERATION OF COMPUTER MUSIC IN THE INTERNET OF THINGS

    Directory of Open Access Journals (Sweden)

    G. G. Rogozinsky

    2015-07-01

    Full Text Available Problem Statement. The paper deals with a distributed intelligent multi-agent system for computer music generation. A mathematical model for data extraction from the environment and its application in the music generation process is proposed. Methods. We use the Resource Description Framework for representation of timbre data. A special musical programming language, Csound, is used for the subsystem of synthesis and sound processing. Sound generation occurs according to the parameters of the compositional model, taking data from the outside world. Results. We propose an architecture for a potential distributed system for computer music generation. An example of core sound synthesis is presented. We also propose a method for mapping real-world parameters onto the plane of the compositional model, in an attempt to imitate elements and aspects of creative inspiration. The music generation system was presented as an artifact in the Central Museum of Communication n.a. A.S. Popov in the framework of the «Night of Museums» event. In the course of the public experiment it was observed that, on the whole, the system tends to settle quickly into a neutral state with no generation of musical events. This proves the necessity of designing algorithms that actively sustain the agents' network. Practical Relevance. Realization of the proposed system will make it possible to create a technological platform for a whole new class of applications, including augmented acoustic reality and algorithmic composition.

  14. A uniform approach for programming distributed heterogeneous computing systems.

    Science.gov (United States)

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  15. Distributed control and data processing system with a centralized database for a BWR power plant

    International Nuclear Information System (INIS)

    Fujii, K.; Neda, T.; Kawamura, A.; Monta, K.; Satoh, K.

    1980-01-01

    Recent digital techniques based on advances in electronics and computer technologies have made possible a very wide range of computer applications in BWR power plant control and instrumentation. Multifarious computers, from micro to mega, are introduced separately. And to get better control and instrumentation system performance, a hierarchical computer complex system architecture has been developed. This paper addresses the hierarchical computer complex system architecture, which enables more efficient introduction of computer systems to a nuclear power plant. Distributed control and processing systems, which are the components of the hierarchical computer complex, are described in some detail, and the database for the hierarchical computer complex is also discussed. The hierarchical computer complex system has been developed and is now in the detailed design stage for actual power plant application. (auth)

  16. Systematic control of large computer programs

    International Nuclear Information System (INIS)

    Goedbloed, J.P.; Klieb, L.

    1986-07-01

    A package of CCL, UPDATE, and FORTRAN procedures is described which facilitates the systematic control and development of large scientific computer programs. The package provides a general tool box for this purpose which contains many conveniences for the systematic administration of files, editing, reformatting of line printer output files, etc. In addition, a small number of procedures is devoted to the problem of structured development of a large computer program which is used by a group of scientists. The essence of the method is contained in three procedures N, R, and X for the creation of a new UPDATE program library, its revision, and its execution, respectively, and a procedure REVISE which provides a joint editor-UPDATE session combining the advantages of the two systems, viz. speed and rigor. (Auth.)

  17. Distributed Reactive Power Control based Conservation Voltage Reduction in Active Distribution Systems

    Directory of Open Access Journals (Sweden)

    EMIROGLU, S.

    2017-11-01

    Full Text Available This paper proposes a distributed reactive power control based approach to deploy a Volt/VAr optimization (VVO)/Conservation Voltage Reduction (CVR) algorithm in a distribution network with distributed generation (DG) units and distribution static synchronous compensators (D-STATCOMs). A three-phase VVO/CVR problem is formulated and the reactive power references of D-STATCOMs and DGs are determined in a distributed way by decomposing the VVO/CVR problem into voltage and reactive power control. The main purpose is to determine the coordination between the voltage regulator (VR) and the reactive power sources (capacitors, D-STATCOMs and DGs) based on VVO/CVR. The study shows that the reactive power injection capability of DG units may play an important role in VVO/CVR. In addition, it is shown that the coordination of the VR and reactive power sources not only saves more energy and power but also reduces the power losses. Moreover, the proposed VVO/CVR algorithm reduces the computational burden and finds fast solutions. To illustrate the effectiveness of the proposed method, the VVO/CVR is performed on the IEEE 13-node test feeder considering unbalanced loading and line configurations. The tests are performed taking practical voltage-dependent load modeling and different customer types into consideration to improve accuracy.

  18. Cardea: Dynamic Access Control in Distributed Systems

    Science.gov (United States)

    Lepro, Rebekah

    2004-01-01

    Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an inter-operable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are examined. Then the SAML and XACML standards, as incorporated into the system, are analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.

  19. Distributing the computation in combinatorial optimization experiments over the cloud

    Directory of Open Access Journals (Sweden)

    Mario Brcic

    2017-12-01

    Full Text Available Combinatorial optimization is an area of great importance, since many real-world problems have discrete parameters which are part of the objective function to be optimized. Development of combinatorial optimization algorithms is guided by the empirical study of candidate ideas and their performance over a wide range of settings or scenarios, to infer general conclusions. The number of scenarios can be overwhelming, especially when modeling uncertainty in some of the problem's parameters. Since the process is also iterative and many ideas and hypotheses may be tested, the execution time of each experiment has an important role in efficiency and success. The structure of such experiments allows for significant execution time improvement by distributing the computation. We focus on cloud computing as a cost-efficient solution in these circumstances. In this paper we present a system for validating and comparing stochastic combinatorial optimization algorithms. The system also deals with the selection of optimal settings for the computational nodes and the number of nodes in terms of the performance-cost tradeoff. We present applications of the system on a new class of project scheduling problems. We show that we can optimize the selection over cloud service providers as one of the settings and, according to the model, this resulted in substantial cost savings while meeting the deadline.
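
    The performance-cost tradeoff can be sketched as a small enumeration: for each (provider, node type, node count) estimate the runtime under a crude scaling model and keep the cheapest configuration that meets the deadline. The catalogue of node types, prices and the efficiency factor are invented for the example.

        # Choose the cheapest (provider, node type, node count) that still meets
        # the experiment deadline; prices and speed factors are illustrative.
        node_types = [
            # (provider, name, cost per node-hour in $, relative speed)
            ("cloudA", "small", 0.10, 1.0),
            ("cloudA", "large", 0.45, 4.2),
            ("cloudB", "medium", 0.22, 2.0),
        ]

        def best_config(serial_hours, deadline_hours, efficiency=0.85, max_nodes=64):
            """Enumerate configurations under a crude linear-scaling model and
            return (cost, provider, type, nodes, runtime) for the cheapest one."""
            candidates = []
            for provider, name, cost, speed in node_types:
                for n in range(1, max_nodes + 1):
                    runtime = serial_hours / (speed * n * efficiency)
                    if runtime <= deadline_hours:
                        candidates.append((cost * n * runtime, provider, name, n, runtime))
            return min(candidates) if candidates else None

        print(best_config(serial_hours=500.0, deadline_hours=12.0))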

  20. Pervasive Computing, Privacy and Distribution of the Self

    Directory of Open Access Journals (Sweden)

    Soraj Hongladarom

    2011-05-01

    Full Text Available The emergence of what is commonly known as “ambient intelligence” or “ubiquitous computing” means that our conception of privacy and trust needs to be reconsidered. Many have voiced their concerns about the threat to privacy and the more prominent role of trust that have been brought about by emerging technologies. In this paper, I will present an investigation of what this means for the self and identity in our ambient intelligence environment. Since information about oneself can be actively distributed and processed, it is proposed that in a significant sense it is the self itself that is distributed throughout a pervasive or ubiquitous computing network when information pertaining to the self of the individual travels through the network. Hence privacy protection needs to be extended to all types of information distributed. It is also recommended that appropriately strong legislation on privacy and data protection regarding this pervasive network is necessary, but at present not sufficient, to ensure public trust. What is needed is a campaign on public awareness and positive perception of the technology.

  1. Management tools for distributed control system in KSTAR

    International Nuclear Information System (INIS)

    Sangil Lee; Jinseop Park; Jaesic Hong; Mikyung Park; Sangwon Yun

    2012-01-01

    The integrated control system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device has been developed as a set of distributed control systems based on Experimental Physics and Industrial Control System (EPICS) middleware. It has the essential role of remote operation, supervision of the tokamak device and conduct of plasma experiments without any interruption. Therefore, the availability of the control system directly impacts the entire device performance. For non-interrupted operation of the KSTAR control system, we have developed a tool named Control System Monitoring (CSM) to monitor the resources of EPICS Input/Output Controller (IOC) servers (utilization of memory, CPU, disk, network, user-defined processes and system-defined processes), the soundness of storage systems (storage utilization, storage status), the status of network switches using the Simple Network Management Protocol (SNMP), the network connection status of every local control server using the Internet Control Message Protocol (ICMP), and the operation environment of the main control room and the computer room (temperature, humidity, electricity) in real time. When abnormal conditions or faults are detected, the CSM raises alarms to the operators. In particular, if a critical fault related to the data storage occurs, the CSM sends short messages to the operators' mobile phones. The operators can then quickly resolve the problem according to the emergency procedure. As a result of this process, KSTAR was able to perform continuous operation and experiments without interruption for 4 months
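
    The ICMP connectivity check at the heart of such a tool can be sketched with the system ping command; host names, the polling period and the alerting hook below are placeholders rather than KSTAR's actual configuration.

        import subprocess
        import time

        CONTROL_SERVERS = ["ioc-magnet", "ioc-vacuum", "ioc-heating"]  # placeholder hosts

        def reachable(host):
            """One ICMP echo request via the system ping command (Linux flags)."""
            return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                                  stdout=subprocess.DEVNULL).returncode == 0

        def monitor(alert, period=30):
            """Poll every control server; raise an alarm once when a host goes
            down and clear the state when it comes back."""
            down = set()
            while True:
                for host in CONTROL_SERVERS:
                    if not reachable(host):
                        if host not in down:
                            alert(f"{host} is unreachable")  # e.g. console alarm or SMS
                            down.add(host)
                    else:
                        down.discard(host)
                time.sleep(period)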

  2. Plancton: an opportunistic distributed computing project based on Docker containers

    Science.gov (United States)

    Concas, Matteo; Berzano, Dario; Bagnasco, Stefano; Lusso, Stefano; Masera, Massimo; Puccio, Maximiliano; Vallero, Sara

    2017-10-01

    The computing power of most modern commodity computers is far from being fully exploited by standard usage patterns. In this work we describe the development and setup of a virtual computing cluster based on Docker containers used as worker nodes. The facility is based on Plancton: a lightweight fire-and-forget background service. Plancton spawns and controls a local pool of Docker containers on a host with free resources, by constantly monitoring its CPU utilisation. It is designed to release the resources allocated opportunistically, whenever another demanding task is run by the host user, according to configurable policies. This is attained by killing a number of running containers. One of the advantages of a thin virtualization layer such as Linux containers is that they can be started almost instantly upon request. We will show how fast start-up and disposal of containers eventually enables us to implement an opportunistic cluster based on Plancton daemons without a central control node, where the spawned Docker containers behave as job pilots. Finally, we will show how Plancton was configured to run up to 10 000 concurrent opportunistic jobs on the ALICE High-Level Trigger facility, giving a considerable advantage in terms of management compared to virtual machines.
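
    The spawn-and-kill control loop can be sketched against the Docker command line; the image name and CPU thresholds are illustrative, and the real daemon's policies are configurable rather than hard-coded. The sketch uses the third-party psutil package for CPU sampling.

        import subprocess
        import psutil          # third-party; pip install psutil

        IMAGE = "pilot:latest"          # placeholder worker image
        CPU_LOW, CPU_HIGH = 60.0, 90.0  # illustrative thresholds (percent)
        pool = []                       # ids of containers we spawned

        def tick():
            """One control-loop iteration in the spirit of Plancton: spawn a worker
            container while the host has spare CPU, kill one when the host is busy."""
            cpu = psutil.cpu_percent(interval=5)
            if cpu < CPU_LOW:
                out = subprocess.run(["docker", "run", "-d", "--rm", IMAGE],
                                     capture_output=True, text=True, check=True)
                pool.append(out.stdout.strip())          # remember the container id
            elif cpu > CPU_HIGH and pool:
                subprocess.run(["docker", "kill", pool.pop()], check=True)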

  3. Classification of bacterial contamination using image processing and distributed computing.

    Science.gov (United States)

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enables us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
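
    The feature-selection-plus-SVM pipeline can be sketched with scikit-learn on synthetic data standing in for the moment and texture features; the Fisher score below is the standard between/within-class variance ratio, and all dataset dimensions are invented.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Synthetic stand-in for moment/texture features of 1000 scatter patterns.
        X, y = make_classification(n_samples=1000, n_features=120, n_informative=15,
                                   n_classes=10, n_clusters_per_class=1, random_state=0)

        def fisher_score(X, y):
            """Ratio of between-class to within-class variance, per feature."""
            classes = np.unique(y)
            mean = X.mean(axis=0)
            between = sum((X[y == c].mean(axis=0) - mean) ** 2 for c in classes)
            within = sum(X[y == c].var(axis=0) for c in classes)
            return between / (within + 1e-12)

        keep = np.argsort(fisher_score(X, y))[-20:]   # 20 most discriminative features
        score = cross_val_score(SVC(kernel="linear"), X[:, keep], y, cv=5).mean()
        print(f"5-fold accuracy with 20 selected features: {score:.2f}")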

  4. Using Model Checking for Analyzing Distributed Power Control Problems

    DEFF Research Database (Denmark)

    Brihaye, Thomas; Jungers, Marc; Lasaulce, Samson

    2010-01-01

    Model checking (MC) is a formal verification technique which has been known and still knows a resounding success in the computer science community. Realizing that the distributed power control (PC) problem can be modeled by a timed game between a given transmitter and its environment, the authors wanted to know whether this approach can be applied to distributed PC. The proposed methodology is as follows. We state some objectives a transmitter-receiver pair would like to reach. The network is modeled by a game where transmitters are considered as timed automata interacting with each other. The objectives are then translated into timed alternating-time temporal logic formulae and MC is exploited to know whether the desired properties are verified and to determine a winning strategy.

  5. Computer-controlled radiation monitoring system

    International Nuclear Information System (INIS)

    Homann, S.G.

    1994-01-01

    A computer-controlled radiation monitoring system was designed and installed at the Lawrence Livermore National Laboratory's Multiuser Tandem Laboratory (10 MV tandem accelerator from High Voltage Engineering Corporation). The system continuously monitors the photon and neutron radiation environment associated with the facility and automatically suspends accelerator operation if preset radiation levels are exceeded. The system has provided reliable real-time radiation monitoring over the past five years, and has been a valuable tool for maintaining personnel exposure as low as reasonably achievable
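
    The automatic-suspension logic amounts to a threshold comparison between detector readings and preset limits. In the sketch below the reader functions, the suspension hook and the limits are all placeholders, not the facility's actual values.

        def interlock(read_photon, read_neutron, suspend_beam,
                      photon_limit=20.0, neutron_limit=10.0):
            """Compare the latest detector readings (e.g. in uSv/h) against preset
            limits and suspend accelerator operation when either is exceeded.
            Reader functions, the suspend hook and the limits are placeholders."""
            photon, neutron = read_photon(), read_neutron()
            if photon > photon_limit or neutron > neutron_limit:
                suspend_beam(reason=f"photon={photon:.1f}, neutron={neutron:.1f}")
                return False    # beam suspended
            return True         # levels acceptable, operation continues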

  6. Multiaxis, Lightweight, Computer-Controlled Exercise System

    Science.gov (United States)

    Haynes, Leonard; Bachrach, Benjamin; Harvey, William

    2006-01-01

    The multipurpose, multiaxial, isokinetic dynamometer (MMID) is a computer-controlled system of exercise machinery that can serve as a means for quantitatively assessing a subject's muscle coordination, range of motion, strength, and overall physical condition with respect to a wide variety of forces, motions, and exercise regimens. The MMID is easily reconfigurable and compactly stowable and, in comparison with prior computer-controlled exercise systems, it weighs less, costs less, and offers more capabilities. Whereas a typical prior isokinetic exercise machine is limited to operation in only one plane, the MMID can operate along any path. In addition, the MMID is not limited to the isokinetic (constant-speed) mode of operation. The MMID provides for control and/or measurement of position, force, and/or speed of exertion in as many as six degrees of freedom simultaneously; hence, it can accommodate more complex, more nearly natural combinations of motions and, in so doing, offers greater capabilities for physical conditioning and evaluation. The MMID (see figure) includes as many as eight active modules, each of which can be anchored to a floor, wall, ceiling, or other fixed object. A cable is payed out from a reel in each module to a bar or other suitable object that is gripped and manipulated by the subject. The reel is driven by a DC brushless motor or other suitable electric motor via a gear-reduction unit. The motor can be made to function as either a driver or an electromagnetic brake, depending on the required nature of the interaction with the subject. The module includes a force sensor and a displacement sensor for real-time monitoring of the tension in and displacement of the cable, respectively. In response to commands from a control computer, the motor can be operated to generate a required tension in the cable, to displace the cable a required distance, or to reel the cable in or out at a required speed. The computer can be programmed, either locally or via

  7. Performance of the TRISTAN computer control network

    International Nuclear Information System (INIS)

    Koiso, H.; Abe, K.; Akiyama, A.; Katoh, T.; Kikutani, E.; Kurihara, N.; Kurokawa, S.; Oide, K.; Shinomoto, M.

    1985-01-01

    An N-to-N token ring network of twenty-four minicomputers controls the TRISTAN accelerator complex. The computers are linked by optical fiber cables with 10 Mbps transmission speed. The software system is based on NODAL, a multi-computer interpreter language developed at the CERN SPS. Typical messages exchanged between computers are NODAL programs and NODAL variables transmitted by the EXEC and REMIT commands. These messages are exchanged as a cluster of packets whose maximum size is 512 bytes. At present, eleven minicomputers are connected to the network and the total length of the ring is 1.5 km. In this condition, the maximum attainable throughput is 980 kbytes/s. The response of a pair of EXEC and REMIT transactions, which transmit a NODAL array A together with the one-line program 'REMIT A' and immediately remit A back, is measured to be 95 + 0.039χ ms, where χ is the array size in bytes. In ordinary accelerator operations, the maximum channel utilization is 2%, the average packet length is 96 bytes and the transmission rate is 10 kbytes/s

  8. Context-aware distributed cloud computing using CloudScheduler

    Science.gov (United States)

    Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.

    2017-10-01

    The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O application on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.

  9. Quality control in quantitative computed tomography

    International Nuclear Information System (INIS)

    Jessen, K.A.; Joergensen, J.

    1989-01-01

    Computed tomography (CT) has for several years been an indispensable tool in diagnostic radiology, but it is only recently that extraction of quantitative information from CT images has been of practical clinical value. Only careful control of the scan parameters, and especially the scan geometry, allows useful information to be obtained; and it can be demonstrated by simple phantom measurements how sensitive a CT system can be to variations in size, shape and position of the phantom in the gantry aperture. Significant differences exist between systems that are not manifested in normal control of image quality and general performance tests. Therefore an actual system has to be analysed for its suitability for quantitative use of the images before critical clinical applications are justified. (author)

  10. A reconfigurable strategy for distributed digital process control

    International Nuclear Information System (INIS)

    Garcia, H.E.; Ray, A.; Edwards, R.M.

    1990-01-01

    A reconfigurable control scheme is proposed which, unlike a preprogrammed one, uses stochastic automata to learn the current operating status of the environment (i.e., the plant, controller, and communication network) by dynamically monitoring the system performance and then switching to the appropriate controller on the basis of these observations. The potential applicability of this reconfigurable control scheme to electric power plants is being investigated. The plant under consideration is the Experimental Breeder Reactor (EBR-II) at the Argonne National Laboratory site in Idaho. The distributed control system is emulated on a ring network where the individual subsystems are hosted as follows: (1) the reconfigurable control modules are located in one of the network modules called Multifunction Controller; (2) the learning modules are resident in a VAX 11/785 mainframe computer; and (3) a detailed model of the plant under control is executed in the same mainframe. This configuration is a true representation of the network-based control system in the sense that it operates in real time and is capable of interacting with the actual plant

  11. Control and operation of distributed generation in distribution systems

    DEFF Research Database (Denmark)

    Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte

    2011-01-01

    Many distribution systems nowadays have significant penetration of distributed generation (DG) and thus, islanding operation of these distribution systems is becoming a viable option for economical and technical reasons. The DG should operate optimally during both grid-connected and islanded conditions. This paper presents a control algorithm which uses the average rate of change of frequency and real power shift (RPS) in the islanded mode. RPS will increase or decrease the power set point of the generator with increasing or decreasing system frequency, respectively. Simulation results show that the proposed method can operate

  12. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    CERN Document Server

    Andrade, Pedro; Bhatt, Kislay; Chand, Phool; Collados, David; Duggal, Vibhuti; Fuente, Paloma; Hayashi, Soichi; Imamagic, Emir; Joshi, Pradyumna; Kalmady, Rajesh; Karnani, Urvashi; Kumar, Vaibhav; Lapka, Wojciech; Quick, Robert; Tarragon, Jacobo; Teige, Scott; Triantafyllidis, Christos

    2012-01-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, service managers, management), from different middleware providers (ARC, dCache, gLite, UNICORE and VDT), consortiums (WLCG, EMI, EGI, OSG), and operational teams (GOC, OMB, OTAG, CSIRT). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG portal where it is exposed to other clients. This monitoring workflow profits from the i...

  13. 13th International Conference on Distributed Computing and Artificial Intelligence

    CERN Document Server

    Silvestri, Marcello; González, Sara

    2016-01-01

    The special session Decision Economics (DECON) 2016 is a scientific forum for sharing ideas, projects, research results, models and experiences associated with the complexity of behavioral decision processes, aiming at explaining socio-economic phenomena. DECON 2016 was held at the University of Seville, Spain, as part of the 13th International Conference on Distributed Computing and Artificial Intelligence (DCAI) 2016. In the tradition of Herbert A. Simon's interdisciplinary legacy, this book dedicates itself to the interdisciplinary study of decision-making, in the recognition that relevant decision-making takes place in a range of critical subject areas and research fields, including economics, finance, information systems, small and international business, management, operations, and production. Decision-making issues are of crucial importance in economics. Not surprisingly, the study of decision-making has received growing empirical research efforts in the applied economic literature over the last ...

  14. Distributed and multi-core computation of 2-loop integrals

    International Nuclear Information System (INIS)

    De Doncker, E; Yuasa, F

    2014-01-01

    For an automatic computation of Feynman loop integrals in the physical region we rely on an extrapolation technique where the integrals of the sequence are obtained with iterated/repeated adaptive methods from the QUADPACK 1D quadrature package. The integration rule evaluations in the outer level, corresponding to independent inner integral approximations, are assigned to threads dynamically via the OpenMP runtime in the parallel implementation. Furthermore, multi-level (nested) parallelism enables an efficient utilization of hyperthreading or larger numbers of cores. For a class of loop integrals in the unphysical region, which do not suffer from singularities in the interior of the integration domain, we find that the distributed adaptive integration methods in the multivariate PARINT package are highly efficient and accurate. We apply these techniques without resorting to integral transformations and report on the capabilities of the algorithms and the parallel performance for a test set including various types of two-loop integrals

  15. Computational optimization of catalyst distributions at the nano-scale

    International Nuclear Information System (INIS)

    Ström, Henrik

    2017-01-01

    Highlights: • Macroscopic data sampled from a DSMC simulation contain statistical scatter. • Simulated annealing is evaluated as an optimization algorithm with DSMC. • Proposed method is more robust than a gradient search method. • Objective function uses the mass transfer rate instead of the reaction rate. • Combined algorithm is more efficient than a macroscopic overlay method. - Abstract: Catalysis is a key phenomenon in a great number of energy processes, including feedstock conversion, tar cracking, emission abatement and optimizations of energy use. Within heterogeneous, catalytic nano-scale systems, the chemical reactions typically proceed at very high rates at a gas–solid interface. However, the statistical uncertainties characteristic of molecular processes pose efficiency problems for computational optimizations of such nano-scale systems. The present work investigates the performance of a Direct Simulation Monte Carlo (DSMC) code with a stochastic optimization heuristic for evaluations of an optimal catalyst distribution. The DSMC code treats molecular motion with homogeneous and heterogeneous chemical reactions in wall-bounded systems and algorithms have been devised that allow optimization of the distribution of a catalytically active material within a three-dimensional duct (e.g. a pore). The objective function is the outlet concentration of computational molecules that have interacted with the catalytically active surface, and the optimization method used is simulated annealing. The application of a stochastic optimization heuristic is shown to be more efficient within the present DSMC framework than using a macroscopic overlay method. Furthermore, it is shown that the performance of the developed method is superior to that of a gradient search method for the current class of problems. Finally, the advantages and disadvantages of different types of objective functions are discussed.
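
    The optimization heuristic itself is generic and can be sketched independently of the DSMC code: simulated annealing occasionally accepts downhill moves, which is what makes it robust to the statistical scatter of a noisy objective (here a toy stand-in for the outlet-concentration objective). The layout encoding and scoring function are invented for the example.

        import math
        import random

        def anneal(evaluate, x0, neighbour, t0=1.0, cooling=0.995, steps=4000):
            """Simulated annealing for a noisy, black-box objective (to maximize).
            Accepting some downhill moves makes the search robust to statistical
            scatter in the objective, unlike a pure greedy/gradient search."""
            x, fx, t = x0, evaluate(x0), t0
            best, fbest = x, fx
            for _ in range(steps):
                y = neighbour(x)
                fy = evaluate(y)
                if fy >= fx or random.random() < math.exp((fy - fx) / t):
                    x, fx = y, fy
                    if fx > fbest:
                        best, fbest = x, fx
                t *= cooling
            return best, fbest

        # Toy stand-in: place 8 active sites among 32 wall cells; the noisy score
        # peaks when the sites cluster near the inlet (cells with low index).
        def score(layout):
            return sum(1.0 - i / 32.0 for i in layout) + random.gauss(0.0, 0.2)

        def flip(layout):
            out = set(layout)
            out.discard(random.choice(list(out)))    # move one site elsewhere
            while len(out) < 8:
                out.add(random.randrange(32))
            return frozenset(out)

        best, f = anneal(score, frozenset(random.sample(range(32), 8)), flip)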

  16. Adaptive, Distributed Control of Constrained Multi-Agent Systems

    Science.gov (United States)

    Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory was recently developed as a broad framework for analyzing and optimizing distributed systems. Here we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MASs), i.e., for distributed stochastic optimization using MASs. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution on the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. One common way to find that equilibrium is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory reveals this to be a particular type of search algorithm for minimizing the Lagrangian. Typically that algorithm is quite inefficient. A more principled alternative is to use a variant of Newton's method to minimize the Lagrangian. Here we compare this alternative to RL-based search in three sets of computer experiments. These are the N-Queens problem and bin-packing problem from the optimization literature, and the Bar problem from the distributed RL literature. Our results confirm that the PD-theory-based approach outperforms the RL-based scheme in all three domains.

  17. Power Consumption Evaluation of Distributed Computing Network Considering Traffic Locality

    Science.gov (United States)

    Ogawa, Yukio; Hasegawa, Go; Murata, Masayuki

    When computing resources are consolidated in a few huge data centers, a massive amount of data is transferred to each data center over a wide area network (WAN). This results in increased power consumption in the WAN. A distributed computing network (DCN), such as a content delivery network, can reduce the traffic from/to the data center, thereby decreasing the power consumed in the WAN. In this paper, we focus on the energy-saving aspect of the DCN and evaluate its effectiveness, especially considering traffic locality, i.e., the amount of traffic related to the geographical vicinity. We first formulate the problem of optimizing the DCN power consumption and describe the DCN in detail. Then, numerical evaluations show that, when there is strong traffic locality and the router has ideal energy proportionality, the system's power consumption is reduced to about 50% of the power consumed in the case where a DCN is not used; moreover, this advantage becomes even larger (up to about 30%) when the data center is located farthest from the center of the network topology.
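
    The qualitative conclusion lends itself to a back-of-the-envelope check. The sketch below assumes ideally energy-proportional routers and a single locality parameter (the fraction of traffic served one hop away rather than hauled to the data center); the hop counts and numbers are invented for illustration, not taken from the paper.

    ```python
    def wan_power(traffic, locality, hops_to_dc, hops_local=1, energy_per_unit_hop=1.0):
        """Relative WAN power with ideally energy-proportional routers:
        power is proportional to (traffic volume) x (hops traversed)."""
        local = locality * traffic * hops_local           # served nearby
        remote = (1.0 - locality) * traffic * hops_to_dc  # hauled to the DC
        return energy_per_unit_hop * (local + remote)

    baseline = wan_power(traffic=100.0, locality=0.0, hops_to_dc=10)
    with_dcn = wan_power(traffic=100.0, locality=0.55, hops_to_dc=10)
    print(with_dcn / baseline)   # ~0.5: strong locality halves WAN power
    ```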

  18. Using Model Checking for Analyzing Distributed Power Control Problems

    Directory of Open Access Journals (Sweden)

    Thomas Brihaye

    2010-01-01

    Full Text Available Model checking (MC) is a formal verification technique which is well established and continues to enjoy resounding success in the computer science community. Realizing that the distributed power control (PC) problem can be modeled as a timed game between a given transmitter and its environment, the authors wanted to know whether this approach could be applied to distributed PC. It turns out that it can be applied successfully and allows one to analyze realistic scenarios, including the case of discrete transmit powers and games with incomplete information. The proposed methodology is as follows. We state some objectives a transmitter-receiver pair would like to reach. The network is modeled by a game where transmitters are considered as timed automata interacting with each other. The objectives are then translated into timed alternating-time temporal logic formulae, and MC is exploited to check whether the desired properties hold and to determine a winning strategy.
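
    The paper's timed games and ATL formulae are beyond a short sketch, but the core of game-based verification is visible in the untimed case: compute, by a backward fixed point (the attractor), the states from which the transmitter can force the play into a goal set no matter what the environment does. The tiny power-control game below is entirely hypothetical.

    ```python
    def attractor(states, edges, controller_states, goal):
        """States from which the controller can force the play into `goal`:
        a backward fixed point over the game graph."""
        win = set(goal)
        changed = True
        while changed:
            changed = False
            for s in states:
                if s in win:
                    continue
                succ = edges.get(s, [])
                if s in controller_states:
                    ok = any(t in win for t in succ)               # controller picks a move
                else:
                    ok = bool(succ) and all(t in win for t in succ)  # environment is adversarial
                if ok:
                    win.add(s)
                    changed = True
        return win

    # Hypothetical game: in c-states the transmitter picks a power level,
    # in e-states the environment resolves the channel.
    states = {"c0", "c1", "e0", "e1", "ok", "fail"}
    edges = {"c0": ["e0", "e1"], "c1": ["e1"], "e0": ["ok"], "e1": ["ok", "fail"]}
    print("c0" in attractor(states, edges, {"c0", "c1"}, {"ok"}))  # True: play e0
    ```

    A winning strategy falls out of the same computation: in each controller state, pick any successor already inside the winning set.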

  19. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    Full Text Available One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN is a dynamic, distributed, load-dependent, and scalable scheme. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely the randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than the randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.
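
    ACWN itself involves richer state exchange than fits here, but its "contracting within neighborhood" flavor can be suggested in a few lines: newly created work stays local unless the local queue is long, in which case it is handed to the least-loaded neighbor. The topology, threshold, and one-hop restriction below are simplifying assumptions.

    ```python
    NEIGHBORS = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # tiny 4-node mesh
    load = {p: 0 for p in NEIGHBORS}

    def spawn(owner, threshold=2):
        """Place a new medium-grained process: keep it local while the queue
        is short, otherwise contract it to the least-loaded neighbor."""
        target = owner
        if load[owner] > threshold:
            best = min(NEIGHBORS[owner], key=lambda n: load[n])
            if load[best] < load[owner]:
                target = best          # migrate the new work one hop
        load[target] += 1
        return target

    for _ in range(40):                # all processes created at node 0
        spawn(0)
    print(load)                        # the work has spread into the neighborhood
    ```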

  20. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on future computer and control systems from researchers all around the world.

  1. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on future computer and control systems from researchers all around the world.

  2. Smart Control of Energy Distribution Grids over Heterogeneous Communication Networks

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Iov, Florin; Hägerling, Christian

    2014-01-01

    The expected growth in distributed generation will significantly affect the operation and control of today's distribution grids. Being confronted with short-time power variations of distributed generation, the assurance of a reliable service (grid stability, avoidance of energy losses) and the qu...

  3. Smart Control of Energy Distribution Grids over Heterogeneous Communication Networks

    DEFF Research Database (Denmark)

    Schwefel, Hans-Peter; Silva, Nuno; Olsen, Rasmus Løvenstein

    2018-01-01

    Off-the-shelf wireless communication technologies reduce infrastructure deployment costs and are thus attractive for distribution system control. Wireless communication, however, may lead to variable network performance. Hence the impact of this variability on overall distribution system control be...

  4. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    Science.gov (United States)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It was proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
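
    The abstract defines the task ratio verbally; a small Monte Carlo, under an assumed preemptive-owner model (Poisson owner arrivals that suspend the parallel task for their full service demand), shows why large task ratios behave well: owner interference averages out over a long task. All rates and demands below are invented, not the paper's analytical model.

    ```python
    import random

    def response_time(task_demand, owner_rate, mean_owner_demand):
        """Wall-clock time to finish `task_demand` seconds of parallel work
        when owner processes arrive as a Poisson stream and preempt it."""
        t, remaining = 0.0, task_demand
        next_owner = random.expovariate(owner_rate)
        while remaining > 0:
            run = min(remaining, max(next_owner - t, 0.0))
            t += run
            remaining -= run
            if remaining > 0:                                   # owner preempts
                t += random.expovariate(1.0 / mean_owner_demand)
                next_owner = t + random.expovariate(owner_rate)
        return t

    for demand in (5.0, 50.0, 500.0):                  # increasing task ratio
        runs = [response_time(demand, 0.05, 2.0) for _ in range(300)]
        # Mean slowdown; its run-to-run spread shrinks as the ratio grows.
        print(demand, sum(runs) / len(runs) / demand)
    ```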

  5. Supervisor Localization: A Top-Down Approach to Distributed Control of Discrete-Event Systems

    International Nuclear Information System (INIS)

    Cai, K.; Wonham, W. M.

    2009-01-01

    A purely distributed control paradigm is proposed for discrete-event systems (DES). In contrast to control by one or more external supervisors, distributed control aims to design built-in strategies for individual agents. First a distributed optimal nonblocking control problem is formulated. To solve it, a top-down localization procedure is developed which systematically decomposes an external supervisor into local controllers while preserving optimality and nonblockingness. An efficient localization algorithm is provided to carry out the computation, and an automated guided vehicles (AGV) example presented for illustration. Finally, the 'easiest' and 'hardest' boundary cases of localization are discussed.

  6. Parallel computation for distributed parameter system-from vector processors to Adena computer

    Energy Technology Data Exchange (ETDEWEB)

    Nogi, T

    1983-04-01

    Research on advanced parallel hardware and software architectures for very high-speed computation deserves and needs more support and attention to fulfil its promise. Novel architectures for parallel processing are being made ready. Architectures for parallel processing can be roughly divided into two groups. One is a vector processor in which a single central processing unit involves multiple vector-arithmetic registers. The other is a processor array in which slave processors are connected to a host processor to perform parallel computation. In this review, the concept and data structure of the Adena (alternating-direction edition nexus array) architecture, which is conformable to distributed-parameter simulation algorithms, are described. 5 references.

  7. Computer utility for interactive instrument control

    International Nuclear Information System (INIS)

    Day, P.

    1975-08-01

    A careful study of the ANL laboratory automation needs in 1967 led to the conclusion that a central computer could support all of the real-time needs of a diverse collection of research instruments. A suitable hardware configuration would require an operating system to provide effective protection, fast real-time response and efficient data transfer. An SDS Sigma 5 satisfied all hardware criteria; however, it was necessary to write an original operating system. Services include program generation, experiment control, real-time analysis, interactive graphics and final analysis. The system is providing real-time support for 21 concurrently running experiments, including an automated neutron diffractometer, a pulsed NMR spectrometer and multi-particle detection systems. It guarantees the protection of each user's interests and dynamically assigns core memory, disk space and 9-track magnetic tape usage. Multiplexor hardware capability allows the transfer of data between a user's device and assigned core area at rates of 100,000 bytes/sec. Real-time histogram generation for a user can proceed at rates of 50,000 points/sec. The facility has been self-running (no computer operator) for five years with a mean time between failures of 10 days and an uptime of 157 hours/week. (auth)

  8. Computer-controlled wall servicing robot

    Energy Technology Data Exchange (ETDEWEB)

    Lefkowitz, S. [Pentek, Inc., Corapolis, PA (United States)

    1995-03-01

    After four years of cooperative research, Pentek has unveiled a new robot with the capability to automatically deliver a variety of cleaning, painting, inspection, and surveillance devices to large vertical surfaces. The completely computer-controlled robot can position a working tool on a 50-foot tall by 50-foot wide vertical surface with a repeatability of 1/16 inch. The working end can literally "fly" across the face of a wall at speed of 60 per minute, and can handle working loads of 350 pounds. The robot was originally developed to decontaminate the walls of reactor fueling cavities at commercial nuclear power plants during fuel outages. If these cavities are left to dry after reactor refueling, contamination present in the residue could later become airborne and move throughout the containment building. Decontaminating the cavity during the refueling outage reduces the need for restrictive personal protective equipment during plant operations to limit the dose rates.

  9. Adaptive Distributed Intelligent Control Architecture for Future Propulsion Systems (Preprint)

    National Research Council Canada - National Science Library

    Behbahani, Alireza R

    2007-01-01

    .... Distributed control is potentially an enabling technology for advanced intelligent propulsion system concepts and is one of the few control approaches that is able to provide improved component...

  10. Distributed Autonomous Control of Multiple Spacecraft During Close Proximity Operations

    National Research Council Canada - National Science Library

    McCamish, Shawn B

    2007-01-01

    This research contributes to multiple spacecraft control by developing an autonomous distributed control algorithm for close proximity operations of multiple spacecraft systems, including rendezvous...

  11. Distributed and cloud computing from parallel processing to the Internet of Things

    CERN Document Server

    Hwang, Kai; Fox, Geoffrey C

    2012-01-01

    Distributed and Cloud Computing, named a 2012 Outstanding Academic Title by the American Library Association's Choice publication, explains how to create high-performance, scalable, reliable systems, exposing the design principles, architecture, and innovative applications of parallel, distributed, and cloud computing systems. Starting with an overview of modern distributed models, the book provides comprehensive coverage of distributed and cloud computing, including: Facilitating management, debugging, migration, and disaster recovery through virtualization Clustered systems for resear

  12. Distributed dendritic processing facilitates object detection: a computational analysis on the visual system of the fly.

    Science.gov (United States)

    Hennig, Patrick; Möller, Ralf; Egelhaaf, Martin

    2008-08-28

    Detecting objects is an important task when moving through a natural environment. Flies, for example, may land on salient objects or may avoid collisions with them. The neuronal ensemble of Figure Detection cells (FD-cells) in the visual system of the fly is likely to be involved in controlling these behaviours, as these cells are more sensitive to objects than to extended background structures. Until now the computations in the presynaptic neuronal network of FD-cells and, in particular, the functional significance of the experimentally established distributed dendritic processing of excitatory and inhibitory inputs are not understood. We use model simulations to analyse the neuronal computations responsible for the preference of FD-cells for small objects. We employed a new modelling approach which allowed us to account for the spatial spread of electrical signals in the dendrites while avoiding detailed compartmental modelling. The models are based on available physiological and anatomical data. Three models were tested, each implementing an inhibitory neural circuit but differing by the spatial arrangement of the inhibitory interaction. Parameter optimisation with an evolutionary algorithm revealed that only distributed dendritic processing satisfies the constraints arising from electrophysiological experiments. In contrast to a direct dendro-dendritic inhibition of the FD-cell (Direct Distributed Inhibition model), an inhibition of its presynaptic retinotopic elements (Indirect Distributed Inhibition model) requires smaller changes in input resistance in the inhibited neurons during visual stimulation. Distributed dendritic inhibition of retinotopic elements as implemented in our Indirect Distributed Inhibition model is the most plausible wiring scheme for the neuronal circuit of FD-cells. This microcircuit is computationally similar to lateral inhibition between the retinotopic elements. Hence, distributed inhibition might be an alternative explanation of

  13. Use of the Web by a Distributed Research group Performing Distributed Computing

    Science.gov (United States)

    Burke, David A.; Peterkin, Robert E.

    2001-06-01

    A distributed research group that uses distributed computers faces a spectrum of challenges--some of which can be met by using various electronic means of communication. The particular challenge of our group involves three physically separated research entities. We have had to link two collaborating groups at AFRL and NRL together for software development, and the same AFRL group with a LANL group for software applications. We are developing and using a pair of general-purpose, portable, parallel, unsteady, plasma physics simulation codes. The first collaboration is centered around a formal weekly video teleconference on relatively inexpensive equipment that we have set up in convenient locations in our respective laboratories. The formal virtual meetings are augmented with informal virtual meetings as the need arises. Both collaborations share research data in a variety of forms on a secure URL that is set up behind the firewall at the AFRL. Of course, a computer-generated animation is a particularly efficient way of displaying results from time-dependent numerical simulations, so we generally like to post such animations (along with proper documentation) on our web page. In this presentation, we will discuss some of our accomplishments and disappointments.

  14. A role for distributed processing in advanced nuclear materials control and accountability systems

    International Nuclear Information System (INIS)

    Tisinger, R.M.; Whitty, W.J.; Ford, W.; Strittmatter, R.B.

    1986-01-01

    Networking and distributed processing hardware and software have the potential of greatly enhancing nuclear materials control and accountability (MC&A) systems, both from safeguards and process operations perspectives, while allowing timely integrated safeguards activities and enhanced computer security at reasonable cost. A hierarchical distributed system is proposed consisting of groups of terminals and instruments in plant production and support areas connected to microprocessors that are connected to either larger microprocessors or minicomputers. The structuring and development of a limited distributed MC&A prototype system, including human engineering concepts, are described. Implications of integrated safeguards and computer security concepts for the distributed system design are discussed

  15. Evaluation of Corba for use in distributed control systems

    International Nuclear Information System (INIS)

    Holloway, F.W.; Arsdall, P. van

    1999-01-01

    The Common Object Request Broker Architecture (CORBA)-based Simulator was a Laboratory Directed Research and Development (LDRD) project that applied simulation techniques to explore critical questions about advanced distributed control system architectures. A three-pronged approach, comprising a study of object-oriented distribution tools, computer network modeling, and simulation of key control system scenarios, was used in the LDRD project. This report describes the first of the three approaches: the study of object-oriented distribution tools, together with measurements and predictions of use within the National Ignition Facility (NIF), and some aspects of CORBA which remain to be resolved. For the Integrated Computer Control System (ICCS), the completeness of suitable functionality, the speed of performance and utilization of machine and network resources, and the developing nature of the commercial CORBA products themselves presented a certain risk. This LDRD thus evaluated CORBA in general, and a particular implementation, to determine its features, performance, and scaling properties, and to optimize its use within the ICCS. Both UNIX and real-time operating systems were studied

  16. Control Architecture for Intentional Island Operation in Distribution Network with High Penetration of Distributed Generation

    DEFF Research Database (Denmark)

    Chen, Yu

    ... to utilize them for maintaining the security of the power supply under emergency situations has been of great interest for study. One proposal is intentional island operation. This PhD project is intended to develop a control architecture for island operation in a distribution system with a high amount of DGs. As part of the NextGen project, this project focuses on the system modeling and simulation regarding the control architecture and recommends the development of a communication and information exchange system based on IEC 61850. This thesis starts with the background of this PhD project... The feasibility of the application of Artificial Neural Networks (ANN) to ICA is studied, in order to improve the computation efficiency of ISR calculation. Finally, the integration of ICA into Dynamic Security Assessment (DSA), the ICA implementation, and the development of ICA are discussed.

  17. NQR spectrometer controlled by a computer

    International Nuclear Information System (INIS)

    Stoican, Ovidiu

    2002-01-01

    Nuclear quadrupole resonance (NQR) is one of the sensitive methods for studying physical and chemical properties of a substance, such as chemical composition, molecular structure, molecular motion and electronic environment. The specifications of the research project require the use of a nuclear quadrupole resonance spectrometer. The design and performance of a pulsed nuclear quadrupole resonance spectrometer prototype covering the range 1-10 MHz are presented. The pulsed NQR method offers considerably higher sensitivity than either the marginal oscillator or super-regenerative methods. Strong echoes are often observed directly with an oscilloscope or a simple receiver. The method allows us to observe two signal categories: free induction decay (FID) and echoes. The block diagram of the pulsed nuclear quadrupole resonance spectrometer is shown. All operations performed by the spectrometer are controlled by a computer. The scanning frequency range, amplitude and width of the RF pulse, additional magnetic field and sample temperature can be controlled by the software. It is also possible to improve the signal-to-noise ratio using digital filtering applied to the stored data. Automatic operation eliminates dependence on operator skill and the uncertainty of manual operation. The NQR spectrometer control software is a stand-alone executable file, runs on the Windows 95/98 platform and does not require the existence of another software package. A graphical interface gives the user easy control over the spectrometer operations. All parameters measured by the control system interface are saved in standard data files and can be processed further. The design is readily adaptable for other applications. The sample is contained within an aluminum cylindrical case. The upper end cap of the case can be removed to allow introducing the sample. On the upper end cap the RF and main temperature sensor connectors are placed. On the internal side of the bottom end cap a thermoelectric cooler (MELCOR

  18. Computer-based control systems of nuclear power plants

    International Nuclear Information System (INIS)

    Kalashnikov, V.K.; Shugam, R.A.; Ol'shevsky, Yu.N.

    1975-01-01

    Computer-based control systems of nuclear power plants may be classified into those using computers for data acquisition only, those using computers for data acquisition and data processing, and those using computers for process control. In the present paper a brief review is given of the functions the systems above mentioned perform, their applications in different nuclear power plants, and some of their characteristics. The trend towards hierarchic systems using control computers with reserves already becomes clear when consideration is made of the control systems applied in the Canadian nuclear power plants that pertain to the first ones equipped with process computers. The control system being now under development for the large Soviet reactors of WWER type will also be based on the use of control computers. That part of the system concerned with controlling the reactor assembly is described in detail

  19. Controlling data transfers from an origin compute node to a target compute node

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2011-06-21

    Methods, apparatus, and products are disclosed for controlling data transfers from an origin compute node to a target compute node that include: receiving, by an application messaging module on the target compute node, an indication of a data transfer from an origin compute node to the target compute node; and administering, by the application messaging module on the target compute node, the data transfer using one or more messaging primitives of a system messaging module in dependence upon the indication.

  20. Using distributed processing on a local area network to increase available computing power

    International Nuclear Information System (INIS)

    Capps, K.S.; Sherry, K.J.

    1996-01-01

    The migration from central computers to desktop computers distributed the total computing horsepower of a system over many different machines. A typical engineering office may have several networked desktop computers that are sometimes idle, especially after work hours and when people are absent. Users would benefit if applications were able to use these networked computers collectively. This paper describes a method of distributing the workload of an application on one desktop system to otherwise idle systems on the network. The authors present this discussion from a developer's viewpoint, because the developer must modify an application before the user can realize any benefit of distributed computing on available systems
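
    As a sketch of the developer-side changes the authors describe, the job must be split into independent work units that can be farmed out. Below, a local process pool stands in for the idle machines on the LAN; in the paper's setting the pool would be replaced by connections to otherwise idle networked systems. The work function is a placeholder.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import math

    def work_unit(chunk):
        """One slice of a divisible application workload."""
        lo, hi = chunk
        return sum(math.sqrt(i) for i in range(lo, hi))

    def run_distributed(n, pieces=8):
        """Split the job into independent units and farm them out.  The
        partitioning logic is the part the application developer must add,
        as the paper notes; the pool stands in for idle LAN machines."""
        step = n // pieces
        chunks = [(i * step, (i + 1) * step) for i in range(pieces)]
        with ProcessPoolExecutor() as pool:
            return sum(pool.map(work_unit, chunks))

    if __name__ == "__main__":
        print(run_distributed(1_000_000))
    ```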

  1. Las Vegas is better than determinism in VLSI and distributed computing

    DEFF Research Database (Denmark)

    Mehlhorn, Kurt; Schmidt, Erik Meineche

    1982-01-01

    In this paper we describe a new method for proving lower bounds on the complexity of VLSI computations and, more generally, distributed computations. Lipton and Sedgewick observed that the crossing sequence arguments used to prove lower bounds in VLSI (or TM or distributed computing) apply to (ac...

  2. VAR control in distribution systems by using artificial intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Golkar, M.A. [Curtin Univ. of Technology, Sarawak (Malaysia). School of Engineering and Science

    2005-07-01

    This paper reviewed artificial intelligence techniques used in VAR control systems. Reactive power controls in distribution systems were also reviewed. While artificial intelligence methods are widely used in power control systems, the techniques require extensive human knowledge bases and experience in order to operate correctly. Expert systems use knowledge and interface procedures to solve problems that often require human expertise. Expert systems often cause knowledge bottlenecks as they are unable to learn or adapt to new situations. While neural networks possess learning ability, they are computationally expensive. However, test results in recent neural network studies have demonstrated that they work well in a variety of loading conditions. Fuzzy logic techniques are used to accurately represent the operational constraints of power systems. Fuzzy logic has an advantage over other artificial intelligence techniques as it is able to remedy uncertainties in data. Evolutionary computing algorithms use probabilistic transition rules which can search complicated data to determine optimal constraints and parameters. Over 95 per cent of all papers published on power systems use genetic algorithms. It was concluded that hybrid systems using various artificial intelligence techniques are now being used by researchers. 69 refs.

  3. Computational Aspects of Sensor Network Protocols (Distributed Sensor Network Simulator

    Directory of Open Access Journals (Sweden)

    Vasanth Iyer

    2009-08-01

    Full Text Available In this work, we model the sensor networks as an unsupervised learning and clustering process. We classify nodes according to their static distribution to form known class densities (CCPD). These densities are chosen from specific cross-layer features which maximize the lifetime of power-aware routing algorithms. To circumvent the computational complexities of a power-aware communication stack we introduce path-loss models at the nodes only for high-density deployments. We study the cluster heads and formulate the data handling capacity for an expected deployment and use localized probability models to fuse the data with its side information before transmission. So each cluster head has a unique Pmax, but not all cluster heads have the same measured value. In lossless mode, if there are no faults in the sensor network, then we can show that the highest probability given by Pmax is ambiguous if its frequency is ≤ n/2; otherwise it can be determined by a local function. We further show that the event detection at the cluster heads can be modelled with 2^m patterns, where m is the number of bits; a correlated pattern of 2 bits can be used, and for a tight lower bound we use 3-bit Huffman codes which have entropy < 1. These local algorithms are further studied to optimize power use and fault detection and to maximize the performance of the distributed routing algorithm used at the higher layers. From these bounds, it is observed that in a large network the power dissipation is network-size invariant. The performance of the routing algorithms is solely based on the success of finding healthy nodes in a large distribution. It is also observed that if the network size is kept constant and the nodes are more densely placed, then the local path-loss model affects the performance of the routing algorithms. We also obtain the maximum intensity of transmitting nodes for a given category of routing algorithms for an outage constraint, i.e., the lifetime of the sensor network.

  4. Client/server models for transparent, distributed computational resources

    International Nuclear Information System (INIS)

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

    Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. Recent development of automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models. Previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator will be presented and will include a discussion of generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, will be presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used production/computation situations. The NPA incorporates a client/server interface for transferring/translation of TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function allowing it to become a shared co-processor to the workstation application. 5 refs., 6 figs
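
    The paper's stack (Sun RPC with XDR, generated by an interface compiler) can be evoked with modern standard-library pieces. The sketch below uses Python's XML-RPC in place of RPC/XDR purely to show the pattern: register a remote service, then call it from the client as if it were a local function. The port and function name are arbitrary assumptions.

    ```python
    import threading
    import xmlrpc.client
    from xmlrpc.server import SimpleXMLRPCServer

    def simulate(steps):
        """Stand-in for a heavy computation hosted on the shared co-processor."""
        return sum(i * i for i in range(steps))

    # Server side: register the procedure so remote clients can call it.
    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(simulate)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client side: the workstation application calls the remote function as
    # if it were local; XML-RPC marshalling plays the role XDR played then.
    proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
    print(proxy.simulate(1_000))
    ```

    The two applications in the abstract have the same shape: the NPA calls a translation service on the Cray, and the fluids code calls a single remote compute function, turning the remote machine into a shared co-processor.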

  5. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The variety of uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  6. Distributed Control in Multi-Vehicle Systems

    Directory of Open Access Journals (Sweden)

    Paul A. Avery

    2013-12-01

    Full Text Available The Southwest Research Institute (SwRI Mobile Autonomous Robotics Technology Initiative (MARTI program has enabled the development of fully-autonomous passenger-sized commercial vehicles and military tactical vehicles, as well as the development of cooperative vehicle behaviors, such as cooperative sensor sharing and cooperative convoy operations. The program has also developed behaviors to interface intelligent vehicles with intelligent road-side devices. The development of intelligent vehicle behaviors cannot be approached as stand-alone phenomena; rather, they must be understood within a context of the broader traffic system dynamics. The study of other complex systems has shown that system-level behaviors emerge as a result of the spatio-temporal dynamics within a system's constituent parts. The design of such systems must therefore account for both the system-level emergent behavior, as well as behaviors of individuals within the system. It has also become clear over the past several years, for both of these domains, that human trust in the behavior of individual vehicles is paramount to broader technology adoption. This paper examines the interplay between individual vehicle capabilities, vehicle connectivity, and emergent system behaviors, and presents some considerations for a distributed control paradigm in a multi-vehicle system.

  7. Applied optimal control theory of distributed systems

    CERN Document Server

    Lurie, K A

    1993-01-01

    This book represents an extended and substantially revised version of my earlier book, Optimal Control in Problems of Mathematical Physics, originally published in Russian in 1975. About 60% of the text has been completely revised and major additions have been included which have produced a practically new text. My aim was to modernize the presentation but also to preserve the original results, some of which are little known to a Western reader. The idea of composites, which is the core of the modern theory of optimization, was initiated in the early seventies. The reader will find here its implementation in the problem of optimal conductivity distribution in an MHD-generator channel flow. Since then it has emerged into an extensive theory which is undergoing a continuous development. The book does not pretend to be a textbook, neither does it offer a systematic presentation of the theory. Rather, it reflects a concept which I consider as fundamental in the modern approach to optimization of distributed systems. ...

  8. Computer controlled vacuum control system for synchrotron radiation beam lines

    International Nuclear Information System (INIS)

    Goldberg, S.M.; Wang, C.; Yang, J.

    1983-01-01

    The increasing number and complexity of vacuum control systems at the Stanford Synchrotron Radiation Laboratory has resulted in the need to computerize its operations in order to lower costs and increase efficiency of operation. Status signals are transmitted through digital and analog serial data links which use microprocessors to monitor vacuum status continuously. Each microprocessor has a unique address and up to 256 can be connected to the host computer over a single RS232 data line. A FORTRAN program on the host computer will request status messages and send control messages via only one RS232 line per beam line, signal the operator when a fault condition occurs, take automatic corrective actions, warn of impending valve failure, and keep a running log of all changes in vacuum status for later recall. Wiring costs are thus greatly reduced and more status conditions can be monitored without adding excessively to the complexity of the system. Operators can then obtain status reports quickly at various locations in the lab without having to read a large number of meters and LEDs
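
    The framing on such an addressed multi-drop line is not spelled out in the abstract, so the message format below (STX / address / ETX requests, two-byte status replies) is entirely hypothetical; a stub transport is used so the sketch runs without hardware. With real equipment the send/recv pair would wrap the single serial port serving the beam line.

    ```python
    def make_request(address):
        """Hypothetical frame: STX, one address byte, ETX."""
        return bytes([0x02, address, 0x03])

    def parse_status(frame):
        """Hypothetical reply: address byte, then one status byte
        (bit 0 = valve open, bit 1 = fault)."""
        return {"address": frame[0],
                "valve_open": bool(frame[1] & 0x01),
                "fault": bool(frame[1] & 0x02)}

    def poll_all(send, recv, n_nodes=4):
        """Poll every addressed node on the shared line and collect faults,
        much as the host program logs vacuum status changes."""
        faults = []
        for addr in range(n_nodes):
            send(make_request(addr))
            report = parse_status(recv())
            if report["fault"]:
                faults.append(report)
        return faults

    # Stub transport so the sketch runs without hardware.
    pending = []
    def send(frame):
        pending.append(frame)
    def recv():
        addr = pending.pop()[1]
        status = 0x03 if addr == 2 else 0x01      # pretend node 2 has a fault
        return bytes([addr, status])

    print(poll_all(send, recv))                   # -> [{'address': 2, ...}]
    ```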

  9. Taxonomy for Evaluation of Distributed Control Strategies for Distributed Energy Resources

    DEFF Research Database (Denmark)

    Han, Xue; Heussen, Kai; Gehrke, Oliver

    2017-01-01

    Distributed control strategies applied to power distribution control problems are meant to offer robust and scalable integration of distributed energy resources (DER). However, the term “distributed control” is often loosely applied to a variety of very different control strategies. In particular.... For such a comparison, a classification is required that is consistent across the different aspects mentioned above. This paper develops systematic categories of control strategies that account for communication, control and physical distribution aspects of the problem, and provides a set of criteria that can...

  10. Control and Operation of Islanded Distribution System

    DEFF Research Database (Denmark)

    Mahat, Pukar

    ... operational challenges. But, on the other hand, it has also opened up some opportunities. One opportunity/challenge is islanded operation of a distribution system with DG unit(s). Islanding is a situation in which a distribution system becomes electrically isolated from the remainder of the power system... deviation and real power shift. When a distribution system, with all its generators operating at maximum power, is islanded, the frequency will go down if the total load is more than the total generation. An under-frequency load shedding procedure for islanded distribution systems with DG unit(s) based... states. Short circuit power also changes when some of the generators in the distribution system are disconnected. This may result in elongation of fault clearing time and hence disconnection of equipment (including generators) in the distribution system or unnecessary operation of protective devices...
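
    The thesis's shedding procedure is based on frequency deviation and real power shift; its details do not survive in this truncated record, so the sketch below shows only the conventional staged under-frequency load-shedding skeleton that such schemes refine. Thresholds and block sizes are illustrative assumptions.

    ```python
    # Staged under-frequency load shedding: each stage trips a block of
    # load when frequency falls below its threshold.
    STAGES = [(49.0, 0.10), (48.7, 0.15), (48.4, 0.20)]  # (Hz threshold, fraction shed)

    def shed_load(frequency_hz, connected_load_mw):
        shed = 0.0
        for threshold, fraction in STAGES:
            if frequency_hz < threshold:
                shed += fraction * connected_load_mw
        return shed

    for f in (49.5, 48.9, 48.3):
        print(f, shed_load(f, connected_load_mw=20.0))   # 0, 2.0, 9.0 MW
    ```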

  11. Autonomous control of distributed storages in microgrids

    DEFF Research Database (Denmark)

    Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    Operation of distributed generators in microgrids has widely been discussed, but would not be fully autonomous if distributed storages are not considered. Storages in general are important, since they provide energy buffering to load changes, energy leveling to source variations and ride-through enhancement to the overall microgrid. Recognizing their importance, this paper presents a scheme for sharing power among multiple distributed storages, in coordination with the distributed sources and loads. The scheme prompts the storages to autonomously sense system conditions, requesting for maximum...

  12. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel; Buse, Gerrit; Pfluger, Dirk

    2012-01-01

    ... of the input parameters. Such an exploration process is, however, not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute
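
    The idea survives the truncation: replace the expensive solver with a cheap surrogate for interactive exploration. A minimal version follows, with linear interpolation standing in for the paper's surrogate model and a toy one-parameter "simulation"; all details are assumptions.

    ```python
    import numpy as np

    def expensive_simulation(p):
        """Placeholder for a long-running solver evaluated at parameter p."""
        return np.sin(3.0 * p) * np.exp(-p)

    # Offline: sample the simulation on a coarse grid of the input parameter.
    grid = np.linspace(0.0, 2.0, 17)
    samples = expensive_simulation(grid)

    def surrogate(p):
        """Online: interactive steering queries hit the cheap interpolant
        instead of the solver (linear here; the paper uses a fancier model)."""
        return np.interp(p, grid, samples)

    print(surrogate(0.33), expensive_simulation(0.33))  # close, and instant
    ```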

  13. Robotic Automation in Computer Controlled Polishing

    Science.gov (United States)

    Walker, D. D.; Yu, G.; Bibby, M.; Dunn, C.; Li, H.; Wu, Y.; Zheng, X.; Zhang, P.

    2016-02-01

    We first present a case study: the manufacture of 1.4 m prototype mirror-segments for the European Extremely Large Telescope, undertaken by the National Facility for Ultra Precision Surfaces at the OpTIC facility operated by Glyndwr University. Scale-up to serial manufacture demands delivery of 1.4 m off-axis aspheric hexagonal segments with demanding surface precision. The case study considers robots and computer numerically controlled ('CNC') polishing machines for optical fabrication. The objective was not to assess which is superior. Rather, it was to understand for the first time their complementary properties, leading us to operate them together as a unit, integrated in hardware and software. Three key areas are reported. First is the novel use of robots to automate currently-manual operations on CNC polishing machines, to improve work-throughput, mitigate risk of damage to parts, and reduce dependence on highly-skilled staff. Second is the use of robots to pre-process surfaces prior to CNC polishing, to reduce total process time. The third draws the threads together, describing our vision of the automated manufacturing cell, where the operator interacts at cell rather than machine level. This promises to deliver a step-change in end-to-end manufacturing times and costs, compared with either platform used on its own or, indeed, the state-of-the-art used elsewhere.

  14. Massive calculations of electrostatic potentials and structure maps of biopolymers in a distributed computing environment

    International Nuclear Information System (INIS)

    Akishina, T.P.; Ivanov, V.V.; Stepanenko, V.A.

    2013-01-01

    Among the key factors determining the processes of transcription and translation are the distributions of the electrostatic potentials of DNA, RNA and proteins. Calculations of electrostatic distributions and structure maps of biopolymers on computers are time consuming and require large computational resources. We developed the procedures for organization of massive calculations of electrostatic potentials and structure maps for biopolymers in a distributed computing environment (several thousands of cores).
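
    The structure of such a massive calculation is simple to sketch: the grid of evaluation points is split into chunks whose Coulomb sums are computed independently. Here multiprocessing stands in for the several-thousand-core environment, and two point charges stand in for a biopolymer's charge distribution; all numbers are illustrative.

    ```python
    import numpy as np
    from multiprocessing import Pool

    CHARGES = np.array([[0.0, 0.0, 0.0, 1.0],     # x, y, z, q (toy "biopolymer")
                        [1.0, 0.0, 0.0, -1.0]])

    def potential_chunk(points):
        """Coulomb potential (Gaussian units) at one chunk of grid points."""
        v = np.zeros(len(points))
        for x, y, z, q in CHARGES:
            r = np.linalg.norm(points - np.array([x, y, z]), axis=1)
            v += q / np.maximum(r, 1e-9)          # avoid division by zero
        return v

    if __name__ == "__main__":
        pts = np.random.rand(100_000, 3) * 5.0    # stand-in structure-map grid
        chunks = np.array_split(pts, 8)           # one chunk per worker
        with Pool(8) as pool:
            v = np.concatenate(pool.map(potential_chunk, chunks))
        print(v.shape)
    ```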

  15. Future Communication, Computing, Control and Management Volume 2

    CERN Document Server

    2012-01-01

    This volume contains revised and extended research articles written by prominent researchers participating in the ICF4C 2011 conference. 2011 International Conference on Future Communication, Computing, Control and Management (ICF4C 2011) has been held on December 16-17, 2011, Phuket, Thailand. Topics covered include intelligent computing, network management, wireless networks, telecommunication, power engineering, control engineering, Signal and Image Processing, Machine Learning, Control Systems and Applications, The book will offer the states of arts of tremendous advances in Computing, Communication, Control, and Management and also serve as an excellent reference work for researchers and graduate students working on Computing, Communication, Control, and Management Research.

  16. Future Communication, Computing, Control and Management Volume 1

    CERN Document Server

    2012-01-01

    This volume contains revised and extended research articles written by prominent researchers participating in the ICF4C 2011 conference. 2011 International Conference on Future Communication, Computing, Control and Management (ICF4C 2011) has been held on December 16-17, 2011, Phuket, Thailand. Topics covered include intelligent computing, network management, wireless networks, telecommunication, power engineering, control engineering, Signal and Image Processing, Machine Learning, Control Systems and Applications, The book will offer the states of arts of tremendous advances in Computing, Communication, Control, and Management and also serve as an excellent reference work for researchers and graduate students working on Computing, Communication, Control, and Management Research.

  17. Future Computing, Communication, Control and Management Volume 2

    CERN Document Server

    2012-01-01

    This volume contains revised and extended research articles written by prominent researchers participating in the ICF4C 2011 conference. 2011 International Conference on Future Communication, Computing, Control and Management (ICF4C 2011) has been held on December 16-17, 2011, Phuket, Thailand. Topics covered include intelligent computing, network management, wireless networks, telecommunication, power engineering, control engineering, Signal and Image Processing, Machine Learning, Control Systems and Applications, The book will offer the states of arts of tremendous advances in Computing, Communication, Control, and Management and also serve as an excellent reference work for researchers and graduate students working on Computing, Communication, Control, and Management Research.

  18. Evaluation of DEC's GIGAswitch for distributed parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.; Hutchins, J.; Brandt, J.

    1993-10-01

    One of Sandia's research efforts is to reduce the end-to-end communication delay in a parallel-distributed computing environment. GIGAswitch is DEC's implementation of a gigabit local area network based on switched FDDI technology. Using the GIGAswitch, the authors intend to minimize the medium access latency suffered by shared-medium FDDI technology. Experimental results show that the GIGAswitch adds 16.5 microseconds of switching and bridging delay to an end-to-end communication. Although the added latency causes a 1.8% throughput degradation and a 5% line efficiency degradation, the availability of dedicated bandwidth is much more than what is available to a workstation on a shared medium. For example, ten directly connected workstations each would have a dedicated bandwidth of 95 Mbps, but if they were sharing the FDDI bandwidth, each would have 10% of the total bandwidth, i.e., less than 10 Mbps. In addition, they have found that when there is no output port contention, the switch's aggregate bandwidth will scale up to multiples of its port bandwidth. However, with output port contention, the throughput and latency performance suffered significantly. Their mathematical and simulation models indicate that the GIGAswitch line efficiency could be as low as 63% when there are nine input ports contending for the same output port. The data indicate that the delay introduced by contention at the server workstation is 50 times that introduced by the GIGAswitch. The authors conclude that the GIGAswitch meets the performance requirements of today's high-end workstations and that the switched FDDI technology provides an alternative that utilizes existing workstation interfaces while increasing the aggregate bandwidth. However, because the speed of workstations is increasing by a factor of 2 every 1.5 years, the switched FDDI technology is only good as an interim solution.
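
    The quoted 63% line efficiency under contention is suggestively close to the classic 1 - (1 - 1/N)^N result for inputs choosing output ports uniformly at random (which tends to 1 - 1/e ≈ 63.2%). The Monte Carlo below checks that arithmetic; it is offered as a plausibility check, not as the authors' model.

    ```python
    import random

    def line_efficiency(n_ports, cycles=20_000):
        """Each cycle, every input picks an output uniformly at random;
        one contender per output is served.  Served fraction = efficiency."""
        served = 0
        for _ in range(cycles):
            targets = [random.randrange(n_ports) for _ in range(n_ports)]
            served += len(set(targets))        # one winner per distinct output
        return served / (cycles * n_ports)

    print(line_efficiency(9))                  # ~0.65; tends to 0.632 as N grows
    ```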

  19. Product Distribution Theory for Control of Multi-Agent Systems

    Science.gov (United States)

    Lee, Chia Fan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for controlling Multi-Agent Systems (MASs). First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. Accordingly we can consider a team game in which the shared utility is a performance measure of the behavior of the MAS. For such a scenario the game is at equilibrium - the Lagrangian is optimized - when the joint distribution of the agents optimizes the system's expected performance. One common way to find that equilibrium is to have each agent run a reinforcement learning algorithm. Here we investigate the alternative of exploiting PD theory to run gradient descent on the Lagrangian. We present computer experiments validating some of the predictions of PD theory for how best to do that gradient descent. We also demonstrate how PD theory can improve performance even when we are not allowed to rerun the MAS from different initial conditions, a requirement implicit in some previous work.
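
    A minimal version of that gradient descent can be written down for a toy team game. The Lagrangian is L(q) = E_q[G] - T S(q) over a product distribution q = q1 q2; parametrizing each factor by softmax logits keeps the iterates on the simplex. The game (anti-coordination), temperature, and learning rate below are invented for illustration.

    ```python
    import numpy as np

    T, LR = 0.2, 0.5
    G = np.array([[1.0, 0.0],       # team cost G(x1, x2): the two agents
                  [0.0, 1.0]])      # should anti-coordinate (differ)

    rng = np.random.default_rng(0)
    theta = [rng.normal(size=2), rng.normal(size=2)]   # softmax logits per agent

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    for _ in range(500):
        q1, q2 = softmax(theta[0]), softmax(theta[1])
        # dL/dq for each agent: expected cost per own move, plus entropy term.
        u1 = G @ q2 + T * (1.0 + np.log(q1))
        u2 = G.T @ q1 + T * (1.0 + np.log(q2))
        for th, q, u in ((theta[0], q1, u1), (theta[1], q2, u2)):
            th -= LR * q * (u - q @ u)          # gradient through the softmax

    print(softmax(theta[0]), softmax(theta[1]))  # near-pure, anti-coordinated
    ```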

  20. Distributed Cooperative Secondary Control of Microgrids Using Feedback Linearization

    DEFF Research Database (Denmark)

    Bidram, Ali; Davoudi, Ali; Lewis, Frank

    2013-01-01

    This paper proposes a secondary voltage control of microgrids based on the distributed cooperative control of multi-agent systems. The proposed secondary control is fully distributed; each distributed generator (DG) only requires its own information and the information of some neighbors. The dist... parameters can be tuned to obtain a desired response speed. The effectiveness of the proposed control methodology is verified by the simulation of a microgrid test system.
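
    The paper's design uses feedback linearization of the DG dynamics; the neighbor-only communication pattern it relies on can still be illustrated with a simpler linear consensus sketch, in which one pinned DG sees the reference and the rest converge through the communication graph. The graph, gains, and voltages below are assumptions for illustration only.

    ```python
    import numpy as np

    # Communication graph: DG i hears only its neighbors; DG 0 also sees
    # the reference (pinning gain).
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    PIN = np.array([1.0, 0, 0, 0])           # only DG 0 measures v_ref
    V_REF, DT, GAIN = 1.0, 0.05, 1.0

    v = np.array([0.92, 0.95, 1.06, 1.03])   # initial DG voltages (p.u.)
    for _ in range(200):
        # Neighbor-only cooperative error, as in distributed secondary control.
        e = A @ v - A.sum(axis=1) * v + PIN * (V_REF - v)
        v = v + DT * GAIN * e
    print(v.round(4))                        # all DGs restored to ~1.0 p.u.
    ```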

  1. Microdot - A Four-Bit Microcontroller Designed for Distributed Low-End Computing in Satellites

    National Research Council Canada - National Science Library

    2002-01-01

    .... An alternative design approach is a distributed network of small and low power microcontrollers designed for space that handle the computing requirements of each individual sensor and actuator...

  2. Tools for the Automation of Large Distributed Control Systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit, SMI++, combining two approaches, finite state machines and rule-based programming, allows for the description of the various sub-systems as decentralized deciding entities, reacting in real time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large-scale, high-complexity applications.
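
    SMI++ has its own state-manager language; the Python toy below only mirrors the concept the abstract describes: sub-systems as decentralized objects whose rules react to the state changes of their children, so standard procedures and state derivation propagate through the hierarchy automatically. Names and states are hypothetical.

    ```python
    class Object:
        """Toy SMI++-style object: a state plus a rule that reacts to the
        states of its children."""
        def __init__(self, name, children=()):
            self.name, self.state, self.children = name, "NOT_READY", list(children)

        def command(self, action):
            # Commands propagate down the hierarchy to the device level.
            for child in self.children:
                child.command(action)
            if not self.children:
                self.state = "READY" if action == "configure" else "NOT_READY"
            self.evaluate_rules()

        def evaluate_rules(self):
            # Rule: "when all children READY -> READY", else NOT_READY.
            if self.children:
                all_ready = all(c.state == "READY" for c in self.children)
                self.state = "READY" if all_ready else "NOT_READY"

    detector = Object("DET", [Object("TRACKER"), Object("CALORIMETER")])
    detector.command("configure")
    print(detector.state)   # READY -- derived from the children's states
    ```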

  3. Micro-computer control for super-critical He generation

    International Nuclear Information System (INIS)

    Tamada, Noriharu; Sekine, Takehiro; Tomiyama, Sakutaro

    1979-01-01

    The development of a large-scale refrigeration system is being stimulated by new superconducting techniques, represented by superconducting power cables and magnets. For the practical operation of such a large system, an automatic control system with a computer is required, because it can attain effective and systematic operation. For this reason, we examined and developed micro-computer control techniques for supercritical He generation, as a simplified control model of the refrigeration system. The experimental results showed that the computer control system can attain fine controllability, even if the control element is only one magnetic valve, but the BASIC programming language of the micro-computer, though convenient and generally used, is not sufficient to control a more complicated system because of its low calculating speed. We conclude that a more effective programming language for the micro-computer must be developed to realize practical refrigeration control. (author)

  4. Data-Driven H∞ Control for Nonlinear Distributed Parameter Systems.

    Science.gov (United States)

    Luo, Biao; Huang, Tingwen; Wu, Huai-Ning; Yang, Xiong

    2015-11-01

    The data-driven H∞ control problem of nonlinear distributed parameter systems is considered in this paper. An off-policy learning method is developed to learn the H∞ control policy from real system data rather than the mathematical model. First, Karhunen-Loève decomposition is used to compute the empirical eigenfunctions, which are then employed to derive a reduced-order model (ROM) of the slow subsystem based on singular perturbation theory. The H∞ control problem is reformulated based on the ROM, which can be transformed into solving the Hamilton-Jacobi-Isaacs (HJI) equation, theoretically. To learn the solution of the HJI equation from real system data, a data-driven off-policy learning approach is proposed based on the simultaneous policy update algorithm, and its convergence is proved. For implementation purposes, a neural network (NN)-based action-critic structure is developed, where a critic NN and two action NNs are employed to approximate the value function, control, and disturbance policies, respectively. Subsequently, a least-squares NN weight-tuning rule is derived with the method of weighted residuals. Finally, the developed data-driven off-policy learning approach is applied to a nonlinear diffusion-reaction process, and the obtained results demonstrate its effectiveness.
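
    The first step, computing empirical eigenfunctions by Karhunen-Loève decomposition, is the standard snapshot-POD calculation and can be sketched directly; the snapshot data below are synthetic, and the 99% energy cutoff is an arbitrary choice.

    ```python
    import numpy as np

    # Snapshot matrix: each column is the spatial profile of the process
    # state at one sampling instant (synthetic data for illustration).
    x = np.linspace(0.0, 1.0, 200)
    snapshots = np.column_stack([np.sin(np.pi * x) * np.exp(-0.1 * t)
                                 + 0.3 * np.sin(2 * np.pi * x) * np.cos(t)
                                 for t in np.linspace(0.0, 5.0, 60)])

    # Karhunen-Loeve / POD: left singular vectors are the empirical
    # eigenfunctions; singular values rank their energy content.
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(energy, 0.99)) + 1
    print(n_modes)                      # a 2-mode ROM captures 99% here

    # Reduced-order state: project snapshots onto the dominant eigenfunctions.
    rom_states = u[:, :n_modes].T @ snapshots
    ```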

  5. Computer network data communication controller for the Plutonium Protection System (PPS)

    International Nuclear Information System (INIS)

    Rogers, M.S.

    1978-10-01

    Systems which employ several computers for distributed processing must provide communication links between the computers to effectively utilize their capacity. The technique of using a central network controller to supervise and route messages on a multicomputer digital communications net has certain economic and performance advantages over alternative implementations. Conceptually, the number of stations (computers) which can be accommodated by such a controller is unlimited, but practical considerations dictate a maximum of about 12 to 15. A Data Network Controller (DNC) has been designed around a M6800 microprocessor for use in the Plutonium Protection System (PPS) demonstration facilities

  6. Operators manual for a computer controlled impedance measurement system

    Science.gov (United States)

    Gordon, J.

    1987-02-01

    Operating instructions of a computer controlled impedance measurement system based in Hewlett Packard instrumentation are given. Hardware details, program listings, flowcharts and a practical application are included.

  7. Distributed Smart Grid Asset Control Strategies for Providing Ancillary Services

    Energy Technology Data Exchange (ETDEWEB)

    Kalsi, Karanjit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhang, Wei [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lian, Jianming [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Marinovici, Laurentiu D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Moya, Christian [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dagle, Jeffery E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-10-30

    With large-scale plans to integrate renewable generation driven mainly by state-level renewable portfolio requirements, more resources will be needed to compensate for the uncertainty and variability associated with intermittent generation resources. Distributed assets can be used to mitigate the concerns associated with renewable energy resources and to keep costs down. Under such conditions, performing primary frequency control using only supply-side resources becomes not only prohibitively expensive but also technically difficult. It is therefore important to explore how a sufficient proportion of the loads could assume a routine role in primary frequency control to maintain the stability of the system at an acceptable cost. The main objective of this project is to develop a novel hierarchical distributed framework for frequency-based load control. The framework involves two decision layers. The top decision layer determines the optimal gain for aggregated loads at each load bus. The gains are computed using decentralized robust control methods, and will be broadcast to the corresponding participating loads every control period. The second layer consists of a large number of heterogeneous devices, which switch probabilistically during contingencies so that the aggregated power change matches the desired amount according to the most recently received gains. The simulation results show great potential to enable systematic design of demand-side primary frequency control with stability guarantees on the overall power system. The proposed design systematically accounts for the interactions between the total load response and bulk power system frequency dynamics. It also guarantees frequency stability under a wide range of time-varying operating conditions. The local device-level load response rules fully respect the device constraints (such as temperature setpoint, compressor time delays of HVACs, or arrival and departure of the deferrable loads), which are crucial for
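
    The device-layer rule can be sketched in a few lines: each load switches independently with a probability chosen so that the expected aggregate change matches the broadcast gain times the frequency deviation. Homogeneous loads and the specific numbers are illustrative assumptions; the project's design adds the device constraints and the robust computation of the gains.

    ```python
    import random

    def frequency_response(loads_kw, gain, freq_dev_hz):
        """Each device sheds independently with probability chosen so the
        *expected* aggregate change matches gain * frequency deviation.
        The gain comes from the top (bus-level) layer; all else is local."""
        target_kw = max(0.0, -gain * freq_dev_hz)      # shed on under-frequency
        total_kw = sum(loads_kw)
        p = min(1.0, target_kw / total_kw)
        return sum(kw for kw in loads_kw if random.random() < p)

    loads = [1.5] * 2000                               # 2000 small HVAC loads
    print(frequency_response(loads, gain=2000.0, freq_dev_hz=-0.5))  # ~1000 kW
    ```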

  8. Optimal dynamic control of resources in a distributed system

    Science.gov (United States)

    Shin, Kang G.; Krishna, C. M.; Lee, Yann-Hang

    1989-01-01

    The authors quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function and derive optimal control strategies using Markov decision theory. The control variables treated are quite general; they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of the approach is provided.
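
    As a toy illustration of the Markov-decision formulation, the sketch below runs value iteration on a hypothetical two-state model with "keep configuration" and "repair" actions; all transition and reward numbers are invented for the example.

        import numpy as np

        def value_iteration(P, R, gamma=0.95, tol=1e-8):
            """P[a] is the state-transition matrix under action a and R[s, a]
            the one-step reward; returns the optimal values and policy."""
            n_states, n_actions = R.shape
            V = np.zeros(n_states)
            while True:
                Q = np.array([R[:, a] + gamma * P[a] @ V
                              for a in range(n_actions)]).T
                V_new = Q.max(axis=1)
                if np.max(np.abs(V_new - V)) < tol:
                    return V_new, Q.argmax(axis=1)
                V = V_new

        # State 0 = degraded, state 1 = healthy; action 0 = keep, 1 = repair.
        P = np.array([[[0.9, 0.1], [0.6, 0.4]],
                      [[0.2, 0.8], [0.1, 0.9]]])
        R = np.array([[0.0, -2.0], [4.0, 1.0]])  # R[s, a], toy rewards
        V, policy = value_iteration(P, R)
        print(policy)  # optimal action to take in each state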

  9. Distributed formation control for autonomous robots

    NARCIS (Netherlands)

    Garcia de Marina Peinado, Hector Jesús

    2016-01-01

    This thesis addresses several theoretical and practical problems related to formation control of autonomous robots. Formation control aims to simultaneously accomplish the tasks of forming a desired shape by the robots and controlling their coordinated collective motion. This kind of robot
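
    A standard displacement-based formation law makes the idea concrete; the sketch below is a generic textbook scheme under a complete communication graph, not the specific controllers developed in this thesis.

        import numpy as np

        def formation_step(x, desired, adjacency, dt=0.1):
            """One step of the displacement-based consensus law: each robot
            steers so that its offset to every neighbour approaches the
            offset prescribed by the desired shape."""
            n = len(x)
            dx = np.zeros_like(x)
            for i in range(n):
                for j in range(n):
                    if adjacency[i, j]:
                        dx[i] += (x[j] - x[i]) - (desired[j] - desired[i])
            return x + dt * dx

        # Three robots converging to a triangle over a complete graph.
        x = np.random.rand(3, 2) * 10.0
        desired = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.7]])
        A = np.ones((3, 3)) - np.eye(3)
        for _ in range(200):
            x = formation_step(x, desired, A)
        print(x - x[0])  # relative positions approach the desired offsets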

  10. Distributed Computing on Gadgetron: A new paradigm for MRI reconstruction

    DEFF Research Database (Denmark)

    Xue, Hui; Kellman, Peter; Inati, Souheil

    cloud computing. With this extension (named GT-Plus), any number of Gadgetron processes can run cooperatively across multiple computers. GT-Plus framework was deployed on Amazon EC2 cloud and NIH’s Biowulf system. We demonstrate that with the GT-Plus cloud, a multi-slice free-breathing myocardial cine...

  11. Computer control in a Compton scattering spectrometer

    International Nuclear Information System (INIS)

    Cui Ningzhuo; Chen Tao; Gong Zhufang; Yang Baozhong; Mo Haiding; Hua Wei; Bian Zuhe

    1995-01-01

    The authors introduce the hardware and software for automatic computer control of calibration and data acquisition in a Compton scattering spectrometer, which consists of an HPGe detector, amplifiers and an MCA

  12. Integration of distributed computing into the drug discovery process.

    Science.gov (United States)

    von Korff, Modest; Rufener, Christian; Stritt, Manuel; Freyss, Joel; Bär, Roman; Sander, Thomas

    2011-02-01

    Grid computing offers an opportunity to gain massive computing power at low costs. We give a short introduction into the drug discovery process and exemplify the use of grid computing for image processing, docking and 3D pharmacophore descriptor calculations. The principle of a grid and its architecture are briefly explained. More emphasis is laid on the issues related to a company-wide grid installation and embedding the grid into the research process. The future of grid computing in drug discovery is discussed in the expert opinion section. Most needed, besides reliable algorithms to predict compound properties, is embedding the grid seamlessly into the discovery process. User-friendly access to powerful algorithms without restrictions such as a limited number of licenses has to be the goal of grid computing in drug discovery.

  13. Experience with a distributed computing system for magnetic field analysis

    International Nuclear Information System (INIS)

    Newman, M.J.

    1978-08-01

    The development of a general-purpose computer system, THESEUS, is described; its initial use has been for magnetic field analysis. The system involves several computers connected by data links. Some are small computers with interactive graphics facilities and limited analysis capabilities, and others are large computers for batch execution of analysis programs with heavy processor demands. The system is highly modular for easy extension and highly portable for transfer to different computers. It can easily be adapted for a completely different application. It provides a highly efficient and flexible interface between magnet designers and specialised analysis programs. Both the advantages and problems experienced are highlighted, together with a mention of possible future developments. (U.K.)

  14. Agent-based distributed hierarchical control of dc microgrid systems

    DEFF Research Database (Denmark)

    Meng, Lexuan; Vasquez, Juan Carlos; Guerrero, Josep M.

    2014-01-01

    In order to enable distributed control and management for microgrids, this paper explores the application of information consensus and local decision-making methods, formulating an agent-based distributed hierarchical control system. A droop-controlled paralleled DC/DC converter system is taken as a case study. The objective is to enhance the system efficiency by finding the optimal sharing ratio of the load current. Virtual resistances in the local control systems are taken as decision variables. Consensus algorithms are applied for global information discovery and coordination of the local control systems. A standard genetic algorithm is applied in each local control system in order to search for a global optimum. Hardware-in-Loop simulation results are shown to demonstrate the effectiveness of the method.
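
    The global-information-discovery step rests on average consensus, which is compact enough to sketch; the graph, measurements and step size below are illustrative.

        import numpy as np

        def consensus(values, adjacency, eps=0.1, iters=200):
            """Discrete-time average consensus: each converter repeatedly
            mixes its local reading with its neighbours' until all agree on
            the network-wide average (the global-information-discovery step
            that precedes tuning the virtual resistances)."""
            x = np.array(values, dtype=float)
            L = np.diag(adjacency.sum(axis=1)) - adjacency  # graph Laplacian
            for _ in range(iters):
                x = x - eps * (L @ x)
            return x

        # Four DC/DC converters on a line graph sharing load-current readings.
        A = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        print(consensus([10.0, 6.0, 8.0, 4.0], A))  # -> all approx 7.0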

  15. Experience with a high order programming language on the development of the Nova distributed control system

    International Nuclear Information System (INIS)

    Suski, G.J.; Holloway, F.W.; Duffy, J.M.

    1983-01-01

    This paper explores the impact of an HOL on the development of the distributed computer control system for the Nova laser fusion facility. As the world's most powerful glass laser, Nova will generate 150 trillion watt pulses of infrared light focused onto fusion targets a few millimeters in diameter. It will perform experiments designed to explore the feasibility of fusion as an energy source of the future. Nova will utilize fifty microcomputers and four VAX-11/780's in a distributed process control computer system architecture

  16. Experience with a high order programming language on the development of the Nova distributed control system

    Energy Technology Data Exchange (ETDEWEB)

    Suski, G.J.; Holloway, F.W.; Duffy, J.M.

    1983-05-10

    This paper explores the impact of an HOL on the development of the distributed computer control system for the Nova laser fusion facility. As the world's most powerful glass laser, Nova will generate 150 trillion watt pulses of infrared light focused onto fusion targets a few millimeters in diameter. It will perform experiments designed to explore the feasibility of fusion as an energy source of the future. Nova will utilize fifty microcomputers and four VAX-11/780's in a distributed process control computer system architecture.

  17. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexities into simple formulations, thus largely reducing development efforts. This book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems.

  18. Fault tolerant distributed real time computer systems for I and C of prototype fast breeder reactor

    Energy Technology Data Exchange (ETDEWEB)

    Manimaran, M., E-mail: maran@igcar.gov.in; Shanmugam, A.; Parimalam, P.; Murali, N.; Satya Murty, S.A.V.

    2014-03-15

    Highlights: • Architecture of the distributed real time computer system (DRTCS) used in I and C of PFBR is explained. • Fault tolerant (hot standby) architecture, fault detection and switch-over are detailed. • A scaled down model was used to study functional and performance requirements of the DRTCS. • Quality of service parameters for the scaled down model were critically studied. - Abstract: Prototype fast breeder reactor (PFBR) is in the advanced stage of construction at Kalpakkam, India. Three-tier architecture is adopted for instrumentation and control (I and C) of PFBR, wherein the bottom tier consists of real time computer (RTC) systems, the middle tier of process computers and the top tier of display stations. These RTC systems are geographically distributed and networked together with the process computers and display stations. Hot standby architecture comprising dual redundant RTC systems with a switch-over logic system is deployed in order to achieve fault tolerance. Fault tolerant dual redundant network connectivity is provided in each RTC system and the TCP/IP protocol is selected for network communication. In order to assess the performance of the distributed RTC systems, a scaled down model was developed with 9 representative systems, and nearly 15% of the I and C signals of PFBR were connected and monitored. Functional and performance testing were carried out for each RTC system, and the fault tolerant characteristics were studied by injecting various faults into the system and observing the performance. Various quality of service parameters like connection establishment delay, priority parameter, transit delay, throughput, residual error ratio, etc., are critically studied for the network.
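
    The hot standby switch-over logic can be illustrated with a minimal heartbeat monitor; the timeout and system labels below are invented for the sketch and are not taken from the PFBR design.

        import time

        class HotStandbyPair:
            """Toy switch-over logic: the standby promotes itself when the
            active system's heartbeat goes stale."""
            def __init__(self, timeout_s=0.5):
                self.timeout_s = timeout_s
                self.active = "A"
                self.last_beat = {"A": time.monotonic(), "B": time.monotonic()}

            def heartbeat(self, system):
                self.last_beat[system] = time.monotonic()

            def poll(self):
                # Switch over if the active side has been silent too long.
                if time.monotonic() - self.last_beat[self.active] > self.timeout_s:
                    self.active = "B" if self.active == "A" else "A"
                return self.active

        pair = HotStandbyPair()
        pair.heartbeat("A"); pair.heartbeat("B")
        time.sleep(0.6)          # system A stops beating...
        pair.heartbeat("B")      # ...while B stays alive
        print(pair.poll())       # -> 'B'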

  19. Computer simulation system of neural PID control on nuclear reactor

    International Nuclear Information System (INIS)

    Chen Yuzhong; Yang Kaijun; Shen Yongping

    2001-01-01

    A neural network proportional-integral-differential (PID) controller for a nuclear reactor is designed, and the control process is simulated by computer. The simulation results show that the neural network PID controller can automatically adjust its parameters to an ideal state, and that good control results can be obtained in the reactor control process
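
    For reference, the sketch below closes a discrete PID loop around a toy first-order plant; the gains are fixed here, whereas in the record above a neural network would adjust them online. All constants are illustrative.

        def pid_step(e, state, kp, ki, kd):
            """One discrete PID update; in the neural scheme the gains
            (kp, ki, kd) would be retuned online, kept fixed here."""
            integ, prev_e = state
            integ += e
            u = kp * e + ki * integ + kd * (e - prev_e)
            return u, (integ, e)

        # Toy closed loop: first-order "reactor power" plant y' = (u - y)/tau.
        y, state, setpoint = 0.0, (0.0, 0.0), 1.0
        for _ in range(200):
            u, state = pid_step(setpoint - y, state, kp=0.8, ki=0.05, kd=0.1)
            y += 0.1 * (u - y)
        print(round(y, 3))  # settles near the setpoint 1.0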

  20. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    Science.gov (United States)

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations at the basis of many investigation fields, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the differential equation system solving into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we choose to implement DCMARK on a single FPGA, designing the single processor in order to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to study the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it allows us to reduce the elaboration time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system-on-programmable-chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
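
    The cell dynamics are purely local, which is what allows one processor per equation; the NumPy sketch below integrates a 200-cell KdV problem sequentially in software, as a stand-in for what DCMARK evaluates in parallel on the FPGA. Grid spacing, time step and the soliton initial condition are illustrative.

        import numpy as np

        def kdv_rhs(u, dx):
            """du/dt = -6*u*u_x - u_xxx with periodic boundaries: every cell
            needs only its four nearest neighbours, which is what makes the
            one-processor-per-cell mapping attractive."""
            ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
            uxxx = (np.roll(u, -2) - 2.0 * np.roll(u, -1)
                    + 2.0 * np.roll(u, 1) - np.roll(u, 2)) / (2.0 * dx**3)
            return -6.0 * u * ux - uxxx

        def rk4_step(u, dt, dx):
            k1 = kdv_rhs(u, dx)
            k2 = kdv_rhs(u + 0.5 * dt * k1, dx)
            k3 = kdv_rhs(u + 0.5 * dt * k2, dx)
            k4 = kdv_rhs(u + dt * k3, dx)
            return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

        n, dx, dt = 200, 0.5, 0.01
        x = np.arange(n) * dx
        u = 0.5 / np.cosh(0.5 * (x - 25.0))**2  # one soliton, speed c = 1
        for _ in range(500):                     # integrate to t = 5
            u = rk4_step(u, dt, dx)
        print(x[np.argmax(u)])                   # crest has moved to x ~ 30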

  1. Distributed Framework for Dynamic Telescope and Instrument Control

    Science.gov (United States)

    Ames, Troy J.; Case, Lynne

    2002-01-01

    Traditionally, instrument command and control systems have been developed specifically for a single instrument. Such solutions are frequently expensive and are inflexible to support the next instrument development effort. NASA Goddard Space Flight Center is developing an extensible framework, known as Instrument Remote Control (IRC), that applies to any kind of instrument that can be controlled by a computer. IRC combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms. The IRC framework provides the ability to communicate to components anywhere on a network using the JXTA protocol for dynamic discovery of distributed components. JXTA (see http://www.jxta.org) is a generalized protocol that allows any devices connected by a network to communicate in a peer-to-peer manner. IRC uses JXTA to advertise a device's IML and discover devices of interest on the network. Devices can join or leave the network and thus join or leave the instrument control environment of IRC. Currently, several astronomical instruments are working with the IRC development team to develop custom components for IRC to control their instruments. These instruments include: the High-resolution Airborne Wideband Camera (HAWC), a first light instrument for the Stratospheric Observatory for Infrared Astronomy (SOFIA); the Submillimeter And Far Infrared Experiment (SAFIRE), a Principal Investigator instrument for SOFIA; and the Fabry-Perot Interferometer Bolometer Research Experiment (FIBRE), a prototype of the SAFIRE instrument, used at the Caltech Submillimeter Observatory (CSO). Most recently, we have

  2. Control and Estimation of Distributed Parameter Systems

    CERN Document Server

    Kappel, F; Kunisch, K

    1998-01-01

    Consisting of 23 refereed contributions, this volume offers a broad and diverse view of current research in control and estimation of partial differential equations. Topics addressed include, but are not limited to - control and stability of hyperbolic systems related to elasticity, linear and nonlinear; - control and identification of nonlinear parabolic systems; - exact and approximate controllability, and observability; - Pontryagin's maximum principle and dynamic programming in PDE; and - numerics pertinent to optimal and suboptimal control problems. This volume is primarily geared toward control theorists seeking information on the latest developments in their area of expertise. It may also serve as a stimulating reader to any researcher who wants to gain an impression of activities at the forefront of a vigorously expanding area in applied mathematics.

  3. FIPA agent based network distributed control system

    Energy Technology Data Exchange (ETDEWEB)

    D. Abbott; V. Gyurjyan; G. Heyes; E. Jastrzembski; C. Timmer; E. Wolin

    2003-03-01

    A control system with the capabilities to combine heterogeneous control systems or processes into a uniform homogeneous environment is discussed. This dynamically extensible system is an example of a software system at the agent level of abstraction. This level of abstraction considers agents as atomic entities that communicate to implement the functionality of the control system. Agents' engineering aspects are addressed by adopting the domain-independent software standard formulated by FIPA. Jade core Java classes are used as a FIPA specification implementation. A special, lightweight, XML RDFS-based, control-oriented ontology markup language is developed to standardize the description of an arbitrary control system data processor. Control processes, described in this language, are integrated into the global system at runtime, without actual programming. Fault tolerance and recovery issues are also addressed.

  4. FIPA agent based network distributed control system

    International Nuclear Information System (INIS)

    Abbott, D.; Gyurjyan, V.; Heyes, G.; Jastrzembski, E.; Timmer, C.; Wolin, E.

    2003-01-01

    A control system with the capabilities to combine heterogeneous control systems or processes into a uniform homogeneous environment is discussed. This dynamically extensible system is an example of a software system at the agent level of abstraction. This level of abstraction considers agents as atomic entities that communicate to implement the functionality of the control system. Agents' engineering aspects are addressed by adopting the domain-independent software standard formulated by FIPA. Jade core Java classes are used as a FIPA specification implementation. A special, lightweight, XML RDFS-based, control-oriented ontology markup language is developed to standardize the description of an arbitrary control system data processor. Control processes, described in this language, are integrated into the global system at runtime, without actual programming. Fault tolerance and recovery issues are also addressed

  5. Smart Spectrometer for Distributed Fuzzy Control

    OpenAIRE

    Benoit, Eric; Foulloy, Laurent

    2009-01-01

    While the main use of colour measurement is metrology, it is now possible to find industrial control applications which use this information. Using colour in process control leads to specific problems where human perception has to be replaced by colour sensors. This paper relies on the fuzzy representation of colours that can be taken into account by fuzzy controllers. If smart sensors already include intelligent func...

  6. Distributed control system for demand response by servers

    Science.gov (United States)

    Hall, Joseph Edward

    Within the broad topical designation of smart grid, research in demand response, or demand-side management, focuses on investigating possibilities for electrically powered devices to adapt their power consumption patterns to better match generation and more efficiently integrate intermittent renewable energy sources, especially wind. Devices such as battery chargers, heating and cooling systems, and computers can be controlled to change the time, duration, and magnitude of their power consumption while still meeting workload constraints such as deadlines and rate of throughput. This thesis presents a system by which a computer server, or multiple servers in a data center, can estimate the power imbalance on the electrical grid and use that information to dynamically change the power consumption as a service to the grid. Implementation on a testbed demonstrates the system with a hypothetical but realistic usage case scenario of an online video streaming service in which there are workloads with deadlines (high-priority) and workloads without deadlines (low-priority). The testbed is implemented with real servers, estimates the power imbalance from the grid frequency with real-time measurements of the live outlet, and uses a distributed, real-time algorithm to dynamically adjust the power consumption of the servers based on the frequency estimate and the throughput of video transcoder workloads. Analysis of the system explains and justifies multiple design choices, compares the significance of the system in relation to similar publications in the literature, and explores the potential impact of the system.
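
    The core mapping, from a locally estimated frequency deviation to a server power budget, fits in a few lines; the droop constant and power bounds below are illustrative and are not the thesis's values.

        def server_power_target(f_measured_hz, p_min_w=150.0, p_max_w=400.0,
                                f_nominal_hz=60.0, droop_w_per_hz=2000.0):
            """Map the locally estimated grid frequency onto a server power
            budget: under-frequency (generation deficit) throttles the
            low-priority transcoding work, over-frequency absorbs surplus."""
            midpoint = (p_min_w + p_max_w) / 2.0
            target = midpoint + droop_w_per_hz * (f_measured_hz - f_nominal_hz)
            return min(p_max_w, max(p_min_w, target))

        print(server_power_target(59.95))  # grid deficit  -> 175.0 W
        print(server_power_target(60.02))  # grid surplus  -> 315.0 W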

  7. Fourier coefficients computation in two variables, a distributional version

    Directory of Open Access Journals (Sweden)

    Carlos Manuel Ulate R.

    2015-01-01

    By considering the distributional summations of Euler-Maclaurin and a suitable choice of the distribution, representations for the Fourier coefficients in two variables are obtained. These representations may be used for the numerical evaluation of the coefficients.

  8. Ring interconnection for distributed memory automation and computing system

    Energy Technology Data Exchange (ETDEWEB)

    Vinogradov, V I [Inst. for Nuclear Research of the Russian Academy of Sciences, Moscow (Russian Federation)

    1996-12-31

    Problems of development of measurement, acquisition and central systems based on a distributed memory and a ring interface are discussed. It has been found that the RAM LINK-type protocol can be used for ringlet links in a non-symmetrical distributed-memory multiprocessor architecture. 5 refs.

  9. Fourier coefficients computation in two variables, a distributional version

    OpenAIRE

    Carlos Manuel Ulate R.

    2015-01-01

    By considering the distributional summations of Euler-Maclaurin and a suitable choice of the distribution, representations for the Fourier coefficients in two variables are obtained. These representations may be used for the numerical evaluation of the coefficients.

  10. Supervisory Control and Diagnostics System Distributed Operating System

    International Nuclear Information System (INIS)

    McGoldrick, P.R.

    1979-01-01

    This paper contains a description of the Supervisory Control and Diagnostics System (SCDS) Distributed Operating System. The SCDS consists of nine 32-bit minicomputers with shared memory. The system's main purpose is to control a large Mirror Fusion Test Facility

  11. Coordinated control of active and reactive power of distribution network with distributed PV cluster via model predictive control

    Science.gov (United States)

    Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng

    2018-02-01

    A coordinated optimal control method for the active and reactive power of a distribution network with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-timescale optimal control and short-timescale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear, and therefore hard to solve directly, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and the effectiveness of the proposed control method
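
    A minimal receding-horizon sketch conveys the multi-step structure; it uses a linearized voltage/reactive-power sensitivity model solved as a convex program with CVXPY, rather than the paper's full second-order cone formulation, and every number is invented.

        import cvxpy as cp
        import numpy as np

        S = np.array([0.04, 0.03, 0.05])   # dv/dq sensitivities of 3 PV buses
        v0 = np.array([1.06, 1.05, 1.07])  # current per-unit bus voltages
        H = 4                              # prediction horizon (steps)

        q = cp.Variable((H, 3))            # reactive-power adjustments
        cost, v = 0, v0
        for t in range(H):
            v = v + cp.multiply(S, q[t])   # linearized voltage prediction
            cost += cp.sum_squares(v - 1.0) + 0.01 * cp.sum_squares(q[t])
        prob = cp.Problem(cp.Minimize(cost), [cp.abs(q) <= 0.5])
        prob.solve()
        print(np.round(q.value[0], 3))  # apply the first step, then re-solve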

  12. Distributed Cognition (DCOG): Foundations for a Computational Associative Memory Model

    National Research Council Canada - National Science Library

    Eggleston, Robert G; McCreight, Katherine L

    2006-01-01

    .... In this report, we describe the foundations of a different type of computational architecture; one that we believe will be less susceptible to cognitive brittleness and can better scale to complex and ill-structured work domains...

  13. A Distributed Agent Architecture for a Computer Virus Immune System

    National Research Council Canada - National Science Library

    Harmer, Paul

    2000-01-01

    .... Information protection and information assurance are vital components required for achieving superiority in the Infosphere, but these goals are threatened by the exponential birth rate of new computer viruses...

  14. EBR-II high-ramp transients under computer control

    International Nuclear Information System (INIS)

    Forrester, R.J.; Larson, H.A.; Christensen, L.J.; Booty, W.F.; Dean, E.M.

    1983-01-01

    During reactor run 122, EBR-II was subjected to 13 computer-controlled overpower transients at ramps of 4 MWt/s to qualify the facility and fuel for transient testing of LMFBR oxide fuels as part of the EBR-II operational-reliability-testing (ORT) program. A computer-controlled automatic control-rod drive system (ACRDS), designed by EBR-II personnel, permitted automatic control on demand power during the transients

  15. Computational intelligence applications in modeling and control

    CERN Document Server

    Vaidyanathan, Sundarapandian

    2015-01-01

    The development of computational intelligence (CI) systems was inspired by observable and imitable aspects of the intelligent activity of human beings and nature. The essence of systems based on computational intelligence is to process and interpret data of various natures, so that CI is strictly connected with the increase of available data as well as the capabilities of their processing, mutually supportive factors. Developed theories of computational intelligence were quickly applied in many fields of engineering, data analysis, forecasting, biomedicine and others. They are used in image and sound processing and identification, signal processing, multidimensional data visualization, steering of objects, analysis of lexicographic data, requesting systems in banking, diagnostic systems, expert systems and many other practical implementations. This book consists of 16 contributed chapters by subject experts who specialize in the various topics addressed in this book. The special chapters have been brought ...

  16. A Novel Distributed Secondary Coordination Control Approach for Islanded Microgrids

    DEFF Research Database (Denmark)

    Lu, Xiaoqing; Yu, Xinghuo; Lai, Jingang

    2018-01-01

    This paper develops a new distributed secondary cooperative control scheme to coordinate distributed generators (DGs) in islanded microgrids (MGs). A finite-time frequency regulation strategy containing a consensus-based distributed active power regulator is presented, which can not only guarantee

  17. Isotopic analysis of plutonium by computer controlled mass spectrometry

    International Nuclear Information System (INIS)

    1974-01-01

    Isotopic analysis of plutonium chemically purified by ion exchange is achieved using a thermal ionization mass spectrometer. Data acquisition from and control of the instrument is done automatically with a dedicated system computer in real time with subsequent automatic data reduction and reporting. Separation of isotopes is achieved by varying the ion accelerating high voltage with accurate computer control

  18. Exact distributions of two-sample rank statistics and block rank statistics using computer algebra

    NARCIS (Netherlands)

    Wiel, van de M.A.

    1998-01-01

    We derive generating functions for various rank statistics and we use computer algebra to compute the exact null distribution of these statistics. We present various techniques for reducing time and memory space used by the computations. We use the results to write Mathematica notebooks for

  19. EPROM-based LSI-11 for distributed instrumentation control

    International Nuclear Information System (INIS)

    Hunt, D.N.

    1981-01-01

    The LLNL Nuclear Chemistry Counting Facility (NCCF) is being converted to a modern production facility. A computer network has been designed and built to implement this conversion. The outermost node of the computer network is a dedicated EPROM-based controller. The controller handles the details of driving the attached nuclear instrumentation, providing a standard interface to the remainder of the network. This paper addresses the design and the implementation of the dedicated instrumentation controller

  20. Distributed voltage control coordination between renewable generation plants in MV distribution grids

    DEFF Research Database (Denmark)

    Petersen, Lennart; Iov, Florin

    2017-01-01

    This study focuses on distributed voltage control coordination between renewable generation plants in medium-voltage distribution grids (DGs). A distributed offline coordination concept has been defined in a previous publication, leading to satisfactory voltage regulation in the DG. However, here...

  1. Secure cloud computing: benefits, risks and controls

    CSIR Research Space (South Africa)

    Carroll, M

    2011-08-01

    Cloud computing presents a new model for IT service delivery and it typically involves over-a-network, on-demand, self-service access, which is dynamically scalable and elastic, utilising pools of often virtualized resources. Through these features...

  2. Experiencing Brain-Computer Interface Control

    NARCIS (Netherlands)

    van de Laar, B.L.A.

    2016-01-01

    Brain-Computer Interfaces (BCIs) are systems that extract information from the user’s brain activity and employ it in some way in an interactive system. While historically BCIs mainly catered to paralyzed or otherwise physically handicapped users, the last couple of years applications with

  3. Final Technical Report: Distributed Controls for High Penetrations of Renewables

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Neely, Jason C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rashkin, Lee J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trudnowski, Daniel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilson, David G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    The goal of this effort was to apply four potential control analysis/design approaches to the design of distributed grid control systems to address the impact of latency and communications uncertainty with high penetrations of photovoltaic (PV) generation. The four techniques considered were: optimal fixed structure control; Nyquist stability criterion; vector Lyapunov analysis; and Hamiltonian design methods. A reduced order model of the Western Electricity Coordinating Council (WECC) developed for the Matlab Power Systems Toolbox (PST) was employed for the study, as well as representative smaller systems (e.g., a two-area, three-area, and four-area power system). Excellent results were obtained with the optimal fixed structure approach, and the methodology we developed was published in a journal article. This approach is promising because it offers a method for designing optimal control systems with the feedback signals available from Phasor Measurement Unit (PMU) data as opposed to full state feedback or the design of an observer. The Nyquist approach inherently handles time delay and incorporates performance guarantees (e.g., gain and phase margin). We developed a technique that works for moderate-sized systems, but the approach does not scale well to extremely large systems because of computational complexity. The vector Lyapunov approach was applied to a two-area model to demonstrate the utility for modeling communications uncertainty. Application to large power systems requires a method to automatically expand/contract the state space and partition the system so that communications uncertainty can be considered. The Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) design methodology was selected to investigate grid systems for energy storage requirements to support high penetration of variable or stochastic generation (such as wind and PV) and loads. This method was applied to several small system models.

  4. Distributed Flight Controls for UAVs, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Two novel flight control actuation concepts for UAV applications are proposed for research and development, both of which incorporate shape memory alloy (SMA) wires...

  5. Implementation of distributed computing system for emergency response and contaminant spill monitoring

    International Nuclear Information System (INIS)

    Ojo, T.O.; Sterling, M.C.Jr.; Bonner, J.S.; Fuller, C.B.; Kelly, F.; Page, C.A.

    2003-01-01

    The availability and use of real-time environmental data greatly enhances emergency response and spill monitoring in coastal and near shore environments. The data would include surface currents, wind speed, wind direction, and temperature. Model predictions (fate and transport) or forensics can also be included. In order to achieve an integrated system suitable for application in spill or emergency response situations, a link is required because this information exists on many different computing platforms. When real-time measurements are needed to monitor a spill, the use of a wide array of sensors and ship-based post-processing methods help reduce the latency in data transfer between field sampling stations and the Incident Command Centre. The common thread linking all these modules is the Transmission Control Protocol/Internet Protocol (TCP/IP), and the result is an integrated distributed computing system (DCS). The in-situ sensors are linked to an onboard computer through the use of a ship-based local area network (LAN) using a submersible device server. The onboard computer serves as both the data post-processor and communications server. It links the field sampling station with other modules, and is responsible for transferring data to the Incident Command Centre. This link is facilitated by a wide area network (WAN) based on wireless broadband communications facilities. This paper described the implementation of the DCS. The test results for the communications link and system readiness were also included. 6 refs., 2 tabs., 3 figs

  6. Formal Development and Verification of a Distributed Railway Control System

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth; Peleska, Jan

    1999-01-01

    In this article we introduce the concept for a distributed railway control system and present the specification and verification of the main algorithm used for safe distributed control. Our design and verification approach is based on the RAISE method, starting with highly abstract algebraic...

  7. Formal Development and Verification of a Distributed Railway Control System

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth; Peleska, Jan

    1998-01-01

    In this article we introduce the concept for a distributed railway control system and present the specification and verification of the main algorithm used for safe distributed control. Our design and verification approach is based on the RAISE method, starting with highly abstract algebraic spec...

  8. FACTS controllers in power transmission and distribution

    CERN Document Server

    Padiyar, KR

    2007-01-01

    About the Book: The emerging technology of Flexible AC Transmission System (FACTS) enables planning and operation of power systems at minimum costs, without compromising security. This is based on modern high power electronic systems that provide fast controllability to ensure "flexible" operation under changing system conditions. This book presents a comprehensive treatment of the subject by discussing the operating principles, mathematical models, control design and issues that affect the applications. The concepts are explained often with illustrative examples and case studies. In partic

  9. The Role of Distributed Computing in Big Data Science: Case Studies in Forensics and Bioinformatics

    OpenAIRE

    Roscigno, Gianluca

    2016-01-01

    2014 - 2015 The era of Big Data is leading to the generation of large amounts of data, which require storage and analysis capabilities that can only be addressed by distributed computing systems. To facilitate large-scale distributed computing, many programming paradigms and frameworks have been proposed, such as MapReduce and Apache Hadoop, which transparently address some issues of distributed systems and hide most of their technical details. Hadoop is curren...

  10. Development of a computer control system for the RCNP ring cyclotron

    International Nuclear Information System (INIS)

    Ogata, H.; Yamazaki, T.; Ando, A.; Hosono, K.; Itahashi, T.; Katayama, I.; Kibayashi, M.; Kinjo, S.; Kondo, M.; Miura, I.; Nagayama, K.; Noro, T.; Saito, T.; Shimizu, A.; Uraki, M.; Maruyama, M.; Aoki, K.; Yamada, S.; Kodaira, K.

    1990-01-01

    A hierarchically distributed computer control system for the RCNP ring cyclotron is being developed. The control system consists of a central computer and four subcomputers which are linked together by an Ethernet, universal device controllers which control component devices, man-machine interfaces including an operator console and interlock systems. The universal device controller is a standard single-board computer with an 8344 microcontroller and parallel interfaces, and is usually integrated into a component device and connected to a subcomputer by means of an optical-fiber cable to achieve high-speed data transfer. Control sequences for subsystems are easily produced and improved by using an interpreter language named OPELA (OPEration Language for Accelerators). The control system will be installed in March 1990. (orig.)

  11. Cooperative Control of Distributed Autonomous Vehicles in Adversarial Environments

    Science.gov (United States)

    2006-08-14

    Final report for Grant #F49620-01-1-0361, "Cooperative Control of Distributed Autonomous Vehicles in Adversarial Environments", Jeff Shamma. ... a single dominant language or a distribution of languages. A relation to multivehicle systems is understanding how highly autonomous vehicles on extended

  12. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed-memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique to map a history to each processor dynamically and to map the control process to a certain processor was applied. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)

  13. Prediction of the filtrate particle size distribution from the pore size distribution in membrane filtration: Numerical correlations from computer simulations

    Science.gov (United States)

    Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio

    2018-03-01

    We present a computational model that describes the diffusion of a hard-sphere colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as the log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics of the filtrate have been defined.
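
    A crude geometric version of the model is easy to sketch: draw particle and pore sizes from normal distributions and keep only the particles smaller than the pore they meet. This stand-in ignores the Brownian dynamics and the multi-plane membrane of the actual study; all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def filtrate_sizes(feed_mean, feed_sd, pore_mean, pore_sd, n=100_000):
            """Geometric sieve: a particle passes if it is smaller than the
            (randomly drawn) pore it meets, so the filtrate distribution is
            the feed distribution reweighted by the passing probability."""
            particles = rng.normal(feed_mean, feed_sd, n)
            pores = rng.normal(pore_mean, pore_sd, n)
            return particles[particles < pores]

        out = filtrate_sizes(feed_mean=1.0, feed_sd=0.3,
                             pore_mean=1.2, pore_sd=0.2)
        print(round(out.mean(), 3))  # biased below the feed mean of 1.0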

  14. A new generation drilling rig: hydraulically powered and computer controlled

    Energy Technology Data Exchange (ETDEWEB)

    Laurent, M.; Angman, P.; Oveson, D. [Tesco Corp., Calgary, AB, (Canada)

    1999-11-01

    The development, testing and operation of a new generation of hydraulically powered and computer controlled drilling rig that incorporates a number of features enhancing functionality and productivity is described. The rig features modular construction, a large heated common drilling machinery room, and permanently mounted drawworks which, along with the permanently installed top drive, significantly reduce rig-up/rig-down time. Also featured are closed and open hydraulic systems and a unique hydraulic distribution manifold. All functions are controlled through a programmable logic controller (PLC), providing almost unlimited interlocks and calculations to increase rig safety and efficiency. Simplified diagnostic routines, remote monitoring and troubleshooting are also part of the system. To date, two rigs are in operation. Performance of both rigs has been rated as 'very good'. Little or no operational problems have been experienced; downtime has averaged 0.61 per cent since August 1998, when the first of the two rigs went into operation. The most important future application for this rig is for use with the casing drilling process, which eliminates the need for drill pipe and tripping. It also reduces drilling time lost due to unscheduled events such as reaming, fishing and taking kicks while tripping. 1 tab., 6 figs.

  15. Control of Neutralization Process Using Soft Computing

    Directory of Open Access Journals (Sweden)

    G. Balasubramanian

    2008-03-01

    A novel model-based nonlinear control strategy is proposed using an experimental pH neutralization process. The control strategy involves a nonlinear neural network (NN) model in the context of internal model control (IMC). When integrated into the internal model control scheme, the resulting controller is shown to have favorable practical implications as well as superior performance. The designed model-based online IMC controller was implemented on a laboratory-scale pH process in real time using a dSPACE 1104 interface card. The responses of pH and acid flow rate show good tracking for both setpoint and load changes over the entire nonlinear region.

  16. Distribution state estimation based voltage control for distribution networks; Koordinierte Spannungsregelung anhand einer Zustandsschaetzung im Verteilnetz

    Energy Technology Data Exchange (ETDEWEB)

    Diwold, Konrad; Yan, Wei [Fraunhofer IWES, Kassel (Germany); Braun, Martin [Fraunhofer IWES, Kassel (Germany); Stuttgart Univ. (Germany). Inst. fuer Energieuebertragung und Hochspannungstechnik (IEH)

    2012-07-01

    The increased integration of distributed energy units creates challenges for the operators of distribution systems. This is due to the fact that distribution systems that were initially designed for distributed consumption and central generation now face decentralized feed-in. One imminent problem associated with decentralized feed-in is local voltage violations in the distribution system, which are hard to handle via conventional voltage control strategies. This article proposes a new voltage control framework for distribution system operation. The framework utilizes the reactive power of distributed energy units as well as on-load tap changers to mitigate voltage problems in the network. Using an optimization band, the control strategy can be used in situations where network information is derived from distribution state estimators and thus contains some error. The control capabilities in combination with a distribution state estimator are tested using data from a real rural distribution network. The results are very promising, as voltage control is achieved quickly and accurately, preventing a majority of the voltage violations during system operation under realistic system conditions. (orig.)

  17. Applying improved instrumentation and computer control systems

    International Nuclear Information System (INIS)

    Bevilacqua, F.; Myers, J.E.

    1977-01-01

    In-core and out-of-core instrumentation systems for the Cherokee-I reactor are described. The reactor has 61 in-core instrument assemblies. Continuous computer monitoring and processing of data from over 300 fixed detectors will be used to improve the manoeuvring of core power. The plant protection system is a standard package for the Combustion Engineering System 80, consisting of two independent systems, the reactor protection system and the engineered safety features actuation system, both of which are designed to meet NRC, ANS and IEEE design criteria or standards. The plant protection system has its own computer which provides plant monitoring, alarming, logging and performance calculations. (U.K.)

  18. Distributed model based control of multi unit evaporation systems

    International Nuclear Information System (INIS)

    Yudi Samyudia

    2006-01-01

    In this paper, we present a new approach to the analysis and design of distributed control systems for multi-unit plants. The approach is established after treating the effect of recycle dynamics as a gap metric uncertainty, from which a distributed controller can be designed sequentially for each unit to tackle the uncertainty. We then use a single-effect multi-unit evaporation system to illustrate how the proposed method is used to analyze different control strategies and to systematically achieve a better closed-loop performance using a distributed model-based controller

  19. Rotational control of computer generated holograms.

    Science.gov (United States)

    Preece, Daryl; Rubinsztein-Dunlop, Halina

    2017-11-15

    We develop a basis for three-dimensional rotation of arbitrary light fields created by computer generated holograms. By adding an extra phase function into the kinoform, any light field or holographic image can be tilted in the focal plane with minimized distortion. We present two different approaches to rotate an arbitrary hologram: the Scheimpflug method and a novel coordinate transformation method. Experimental results are presented to demonstrate the validity of both proposed methods.
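
    For intuition, the generic "gratings and lenses" construction below places focal spots along a tilted line by pairing a linear phase ramp (lateral shift) with a Fresnel-lens term (axial shift). It is a standard CGH sketch under invented units, not the Scheimpflug or coordinate-transformation methods proposed by the authors.

        import numpy as np

        N = 512                              # SLM resolution (illustrative)
        x = np.linspace(-1.0, 1.0, N)
        X, Y = np.meshgrid(x, x)

        def spot_phase(ax, ay, dz):
            """Phase for one focal spot: linear ramps shift it laterally by
            (ax, ay); the quadratic Fresnel-lens term shifts it axially by
            dz (all in arbitrary units)."""
            return ax * X + ay * Y + dz * (X**2 + Y**2)

        # Tilt a line of three spots by 10 degrees: give each spot an axial
        # displacement proportional to its lateral position.
        theta = np.deg2rad(10.0)
        spots = [spot_phase(ax=40.0 * s, ay=0.0, dz=40.0 * s * np.tan(theta))
                 for s in (-1.0, 0.0, 1.0)]
        field = sum(np.exp(1j * p) for p in spots)  # superpose the spots
        kinoform = np.angle(field)                  # keep phase, in [-pi, pi]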

  20. A computational approach to discovering the functions of bacterial phytochromes by analysis of homolog distributions

    Directory of Open Access Journals (Sweden)

    Lamparter, Tilman

    2006-03-01

    Background: Phytochromes are photoreceptors, discovered in plants, that control a wide variety of developmental processes. They have also been found in bacteria and fungi, but for many species their biological role remains obscure. This work concentrates on the phytochrome system of Agrobacterium tumefaciens, a non-photosynthetic soil bacterium with two phytochromes. To identify proteins that might share common functions with phytochromes, a co-distribution analysis was performed on the basis of protein sequences from 138 bacteria. Results: A database of protein sequences from 138 bacteria was generated. Each sequence was BLASTed against the entire database. The homolog distribution of each query protein was then compared with the homolog distribution of every other protein (target protein) of the same species, and the target proteins were sorted according to their probability of co-distribution under random conditions. As query proteins, phytochromes from Agrobacterium tumefaciens, Pseudomonas aeruginosa, Deinococcus radiodurans and Synechocystis PCC 6803 were chosen along with several phytochrome-related proteins from A. tumefaciens. The Synechocystis photosynthesis protein D1 was selected as a control. In the D1 analyses, the ratio between photosynthesis-related proteins and those not related to photosynthesis among the top 150 in the co-distribution tables was > 3:1, showing that the method is appropriate for finding partner proteins with common functions. The co-distribution of phytochromes with other histidine kinases was remarkably high, although most co-distributed histidine kinases were not direct BLAST homologs of the query protein. This finding implies that phytochromes and other histidine kinases share common functions as parts of signalling networks. All phytochromes tested, with one exception, also revealed a remarkably high co-distribution with glutamate synthase and methionine synthase. This result implies a general role of
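
    The co-distribution idea reduces to comparing presence/absence vectors across genomes. The sketch below ranks hypothetical target proteins by their phi correlation with a query protein, a simplified stand-in for the random-co-distribution probability ranking used in the study.

        import numpy as np

        rng = np.random.default_rng(1)

        def codistribution_rank(query, targets):
            """Rank targets by how closely their homolog presence/absence
            pattern across genomes tracks the query's (phi correlation)."""
            q = query - query.mean()
            scores = []
            for t in targets:
                tt = t - t.mean()
                denom = np.sqrt((q @ q) * (tt @ tt))
                scores.append((q @ tt) / denom if denom else 0.0)
            return np.argsort(scores)[::-1]  # most co-distributed first

        # Toy data: presence (1) / absence (0) of a homolog in 138 genomes.
        phytochrome = rng.integers(0, 2, 138)
        flips = (0.05, 0.3, 0.5)  # noisier and noisier copies of the pattern
        targets = [phytochrome ^ (rng.random(138) < p).astype(int) for p in flips]
        print(codistribution_rank(phytochrome, targets))  # -> [0 1 2]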