WorldWideScience

Sample records for networked unix computers

  1. Development of a UNIX network compatible reactivity computer

    International Nuclear Information System (INIS)

    Sanchez, R.F.; Edwards, R.M.

    1996-01-01

    A state-of-the-art UNIX network compatible controller and UNIX host workstation with MATLAB/SIMULINK software were used to develop, implement, and validate a digital reactivity calculation. An objective of the development was to determine why the reactivity output of a Macintosh-based reactivity computer drifted intolerably

  2. A secure file manager for UNIX

    Energy Technology Data Exchange (ETDEWEB)

    DeVries, R.G.

    1990-12-31

    The development of a secure file management system for a UNIX-based computer facility with supercomputers and workstations is described. Specifically, UNIX in its usual form does not address: (1) Operation which would satisfy rigorous security requirements. (2) Online space management in an environment where total data demands would be many times the actual online capacity. (3) Making the file management system part of a computer network in which users of any computer in the local network could retrieve data generated on any other computer in the network. The characteristics of UNIX can be exploited to develop a portable, secure file manager which would operate on computer systems ranging from workstations to supercomputers. Implementation considerations making unusual use of UNIX features, rather than requiring extensive internal system changes, are described, and implementation using the Cray Research Inc. UNICOS operating system is outlined.

  3. gLExec: gluing grid computing to the Unix world

    Science.gov (United States)

    Groep, D.; Koeroo, O.; Venekamp, G.

    2008-07-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.
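
    The Unix-side core of such a credential mapper is the identity switch itself: once the site's authorization and mapping frameworks have approved a target account, the process drops from its invoking identity to that account's uid and gid before executing the payload. The C sketch below illustrates only this step with standard POSIX calls; it is not gLExec's actual code, the account name is a placeholder, and a real deployment performs the LCAS/LCMAPS or GUMS checks first.

```c
/* Minimal sketch of the credential-switch step a tool like gLExec
 * performs after a grid identity has been mapped to a local account.
 * The target account is hypothetical; real systems authorize first. */
#include <stdio.h>
#include <unistd.h>
#include <pwd.h>
#include <grp.h>

int main(int argc, char *argv[])
{
    const char *target = "gridpool01";      /* mapped local account (placeholder) */
    struct passwd *pw = getpwnam(target);
    if (pw == NULL) {
        fprintf(stderr, "no such account: %s\n", target);
        return 1;
    }
    /* Order matters: set groups and gid while still privileged, uid last. */
    if (initgroups(pw->pw_name, pw->pw_gid) != 0 ||
        setgid(pw->pw_gid) != 0 ||
        setuid(pw->pw_uid) != 0) {
        perror("failed to switch identity");
        return 1;
    }
    /* From here on, the payload job runs under the mapped uid/gid. */
    if (argc > 1)
        execvp(argv[1], &argv[1]);
    perror("execvp");
    return 1;
}
```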

  4. gLExec: gluing grid computing to the Unix world

    International Nuclear Information System (INIS)

    Groep, D; Koeroo, O; Venekamp, G

    2008-01-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.

  5. CERN's common Unix and X terminal environment

    International Nuclear Information System (INIS)

    Cass, Tony

    1996-01-01

    The Desktop Infrastructure Group of CERN's Computing and Networks Division has developed a Common Unix and X Terminal Environment to ease the migration to Unix-based interactive computing. The CUTE architecture relies on a distributed filesystem - currently Transarc's AFS - to enable essentially interchangeable client workstations to access both home directory and program files transparently. Additionally, we provide a suite of programs to configure workstations for CUTE and to ensure continued compatibility. This paper describes the different components and the development of the CUTE architecture. (author)

  6. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected via Ethernet. Data exchanges between tasks located in each processing element are realized in two ways. One is sockets, the standard library on recent UNIX operating systems. The other is a network-connecting software package named Parallel Virtual Machine (PVM), free software developed by ORNL, which uses many workstations connected to a network as a parallel computer. This paper discusses the availability of parallel computing using networked UNIX workstations, and a comparison with specialized parallel systems (Transputer and iPSC/860), in a Monte Carlo simulation, which generally shows a high parallelization ratio. (author)
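
    As a hedged illustration of the first mechanism, the sketch below ships a block of simulation results from a worker task to a collector over a plain BSD stream socket. The collector host name and port are placeholders, and a production code would add byte-order conversion and error recovery.

```c
/* Sketch of socket-based data exchange between two tasks, the first
 * mechanism described above. Host name and port are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    double results[1000];                 /* e.g., Monte Carlo tallies */
    memset(results, 0, sizeof results);

    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("collector.example.org", "9000", &hints, &res) != 0) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        perror("connect");
        return 1;
    }
    /* Send the whole block; peers must also agree on byte order. */
    ssize_t n = write(fd, results, sizeof results);
    printf("sent %zd bytes\n", n);
    close(fd);
    freeaddrinfo(res);
    return 0;
}
```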

  7. UNIX at high energy physics Laboratories

    Energy Technology Data Exchange (ETDEWEB)

    Silverman, Alan

    1994-03-15

    With more and more high energy physics Laboratories ''downsizing'' from large central proprietary mainframe computers towards distributed networks, usually involving UNIX operating systems, the need was expressed at the 1991 Computers in HEP (CHEP) Conference to create a group to consider the implications of this trend and perhaps work towards some common solutions to ease the transition for HEP users worldwide.

  8. The Newcastle connection: A software subsystem for constructing distributed UNIX systems

    International Nuclear Information System (INIS)

    Randell, B.

    1985-01-01

    The Newcastle connection is a software subsystem that can be added to each of a set of physically interconnected UNIX or UNIX look-alike systems, so as to construct a distributed system which is functionally indistinguishable at both the user and the program level from a conventional single-processor UNIX system. The techniques used are applicable to a variety and multiplicity of both local and wide area networks, and enable all issues of inter-processor communication, network protocols, etc., to be hidden. A brief account is given of experience with such distributed systems, the first of which was constructed in 1982 using a set of PDP11s running UNIX Version 7, and connected by a Cambridge Ring - since this date the Connection has been used to construct distributed systems based on various other computers and versions of UNIX, both at Newcastle and elsewhere. The final sections compare our scheme to various precursor schemes and discuss its potential relevance to other operating systems. (orig.)
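
    The transparency the Connection achieves can be pictured as library-level interception of file-system calls: a path that names a remote machine is detected and forwarded over the network, while local paths fall through to the ordinary system call. The C sketch below is a loose illustration of that idea, not the Newcastle code; remote_open() is a stand-in for the RPC machinery, and the '/../host/...' naming follows the Connection's published 'super-root' convention.

```c
/* Loose sketch of the Newcastle Connection's interception idea.
 * remote_open() is a stand-in for the real RPC layer. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the RPC stub that opens a file on another machine. */
static int remote_open(const char *host, const char *path, int flags)
{
    printf("would forward open(%s) to host %s over the network\n", path, host);
    return -1; /* no real transport in this sketch */
}

/* Wrapper that applications would call instead of open(2). */
static int nc_open(const char *path, int flags)
{
    /* The Connection named remote roots with a "super-root": /../host/... */
    if (strncmp(path, "/../", 4) == 0) {
        const char *rest = strchr(path + 4, '/');
        char host[64];
        size_t len;
        if (rest == NULL)
            return -1;
        len = (size_t)(rest - (path + 4));
        if (len == 0 || len >= sizeof host)
            return -1;
        memcpy(host, path + 4, len);
        host[len] = '\0';
        return remote_open(host, rest, flags);  /* remote: route via RPC */
    }
    return open(path, flags);                   /* local: ordinary open(2) */
}

int main(void)
{
    nc_open("/../unix2/usr/brian/notes", O_RDONLY); /* remote example */
    nc_open("/etc/hosts", O_RDONLY);                /* local example */
    return 0;
}
```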

  9. UNIX at high energy physics Laboratories

    International Nuclear Information System (INIS)

    Silverman, Alan

    1994-01-01

    With more and more high energy physics Laboratories ''downsizing'' from large central proprietary mainframe computers towards distributed networks, usually involving UNIX operating systems, the need was expressed at the 1991 Computers in HEP (CHEP) Conference to create a group to consider the implications of this trend and perhaps work towards some common solutions to ease the transition for HEP users worldwide.

  10. Practical Unix and Internet Security

    CERN Document Server

    Garfinkel, Simson; Spafford, Gene

    2003-01-01

    When Practical Unix Security was first published more than a decade ago, it became an instant classic. Crammed with information about host security, it saved many a Unix system administrator from disaster. The second edition added much-needed Internet security coverage and doubled the size of the original volume. The third edition is a comprehensive update of this very popular book - a companion for the Unix/Linux system administrator who needs to secure his or her organization's system, networks, and web presence in an increasingly hostile world. Focusing on the four most popular Unix varia

  11. Computer networks and advanced communications

    International Nuclear Information System (INIS)

    Koederitz, W.L.; Macon, B.S.

    1992-01-01

    One of the major methods for getting the most productivity and benefits from computer usage is networking. However, for those who are contemplating a change from stand-alone computers to a network system, the investigation of actual networks in use presents a paradox: network systems can be highly productive and beneficial; at the same time, these networks can create many complex, frustrating problems. The issue becomes a question of whether the benefits of networking are worth the extra effort and cost. In response to this issue, the authors review in this paper the implementation and management of an actual network in the LSU Petroleum Engineering Department. The network, which has been in operation for four years, is large and diverse (50 computers, 2 sites, PCs, UNIX RISC workstations, etc.). The benefits, costs, and method of operation of this network will be described, and an effort will be made to objectively weigh these elements from the point of view of the average computer user.

  12. Utilities in UNIX; Programas de Utilidad UNIX

    Energy Technology Data Exchange (ETDEWEB)

    Perez, L

    2002-07-01

    This manual is intended for users with some experience of the UNIX operating system, to help them work more efficiently with the UNIX variants of most vendors. It covers the majority of UNIX commands and the shell built-in functions used to create scripts, and gives a brief explanation of the variables in several environments. In addition, other products are included that are increasingly integrated into most UNIX operating systems, for example the scanning and processing language awk, the print server LPRng, the GNU utilities, the batch subsystem, etc. The manual was initially based on one specific UNIX, but it has been written to apply to most UNIX systems in use: Tru64 UNIX, AIX, IRIX, HP-UX, Solaris and Linux, and many examples are included in each chapter. The purpose of this manual is to provide a UNIX reference for advanced users on any operating system of the UNIX family. (Author)

  13. Computing and Network - Overview

    International Nuclear Information System (INIS)

    Jakubowski, Z.

    1999-01-01

    The responsibility of the Network Group covers: providing central services like WWW, DNS (Domain Name Server), mail, etc.; maintenance and support of the Local Area Networks (LAN); operation of the Wide Area Networks (WAN); support of the central UNIX servers and desktop workstations; and VAX/VMS cluster operation and support. The two-processor HP-UX K-200 and 6-processor SGI Challenge XL servers were delivering stable services to our users. Both servers were upgraded during the past year. The SGI Challenge received an additional 256 MB of memory, which was necessary in order to get the full benefit of the true 64-bit architecture of SGI IRIX 6.2. The upgrade of our HP K-200 server was problematic, so we decided to buy a new, powerful machine and join the old and new machines via the fast network. Besides these main servers we have more than 30 workstations from IBM, DEC, HP, SGI and SUN. We observed a real race in PC technology in the past year: Intel processors currently deliver performance comparable with HP or SUN workstations at very low cost. This CPU power is especially visible under Linux, a free Unix-like operating system. Clusters of cheap PC computers should be seriously considered in planning the computing power for future experiments. CPU power was further decentralized: smaller but powerful computers cover the growing computing demands of our work-groups, creating small ''local computing centers''. The stable network and the concept of central services play an essential role in this scenario. Unfortunately, network performance for international communications is persistently unacceptable. We believe that attempting to join the European Quantum project is the only way to achieve reasonable international network performance. Under this plan the Polish scientific community will gain a 34 Mbps international link. The growing costs of ''real meetings'' give us no alternative to ''virtual meetings'' via the network in the

  14. Use of UNIX in large online processor farms

    Science.gov (United States)

    Biel, Joseph R.

    1990-08-01

    There has been a recent rapid increase in the power of RISC computers running the UNIX operating system. Fermilab has begun to make use of these computers in the next generation of offline computer farms. It is also planning to use such computers in online computer farms. Issues involved in constructing online UNIX farms are discussed.

  15. The equipment access software for a distributed UNIX-based accelerator control system

    International Nuclear Information System (INIS)

    Trofimov, Nikolai; Zelepoukine, Serguei; Zharkov, Eugeny; Charrue, Pierre; Gareyte, Claire; Poirier, Herve

    1994-01-01

    This paper presents a generic equipment access software package for a distributed control system using computers with UNIX or UNIX-like operating systems. The package consists of three main components: an application Equipment Access Library, a Message Handler and an Equipment Data Base. An application task, which may run on any computer in the network, sends requests to access equipment through Equipment Library calls. The basic request is in the form Equipment-Action-Data and is routed via a remote procedure call to the computer to which the given equipment is connected. In that computer the request is received by the Message Handler. According to the type of the equipment connection, the Message Handler either passes the request to the specific process software in the same computer or forwards it to a lower-level network of equipment controllers using MIL1553B, GPIB, RS232 or BITBUS communication. The answer is then returned to the calling application. Descriptive information required for request routing and processing is stored in the real-time Equipment Data Base. The package has been written to be portable and is currently available on DEC Ultrix, LynxOS, HPUX, XENIX, OS-9 and Apollo domain. ((orig.))
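
    The Equipment-Action-Data request form and the Message Handler's routing decision described above can be sketched in a few lines of C. All names below are illustrative, not the actual package's API: the request structure, the connection table lookup, and the equipment name are assumptions for the purpose of the example.

```c
/* Sketch of the Equipment-Action-Data request and the routing decision
 * made by the Message Handler. All identifiers are illustrative only. */
#include <stdio.h>
#include <string.h>

enum connection { LOCAL_PROCESS, FIELDBUS };  /* from the Equipment Data Base */

struct eq_request {
    char equipment[32];   /* e.g. a magnet power converter (hypothetical name) */
    char action[16];      /* e.g. "SET" or "READ" */
    double data;          /* value to write, if any */
};

/* Normally this lookup would consult the real-time Equipment Data Base. */
static enum connection lookup_connection(const char *equipment)
{
    return strcmp(equipment, "PC.QF1") == 0 ? FIELDBUS : LOCAL_PROCESS;
}

static void handle_request(const struct eq_request *rq)
{
    switch (lookup_connection(rq->equipment)) {
    case LOCAL_PROCESS:
        printf("pass %s %s to equipment software on this computer\n",
               rq->equipment, rq->action);
        break;
    case FIELDBUS:
        printf("forward %s %s to a lower-level controller (MIL1553B/GPIB/RS232/BITBUS)\n",
               rq->equipment, rq->action);
        break;
    }
}

int main(void)
{
    struct eq_request rq = { "PC.QF1", "SET", 120.5 };
    handle_request(&rq);   /* in the real system this arrives via RPC */
    return 0;
}
```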

  16. A unix configuration engine

    International Nuclear Information System (INIS)

    Burgess, M.

    1994-06-01

    A high-level description language is presented for the purpose of automatically configuring large heterogeneous networked unix environments, based on class-oriented abstractions. The configuration engine is portable and easily extensible.

  17. Unix Application Migration Guide

    CERN Document Server

    Microsoft. Redmond

    2003-01-01

    Drawing on the experience of Microsoft consultants working in the field, as well as external organizations that have migrated from UNIX to Microsoft® Windows®, this guide offers practical, prescriptive guidance on the issues you are likely to face when porting existing UNIX applications to the Windows operating system environment. Senior IT decision makers, network managers, and operations managers will get real-world guidance and best practices on planning and implementation issues to understand the different methods through which migration or co-existence can be accomplished. Also detailing

  18. Evaluation of Unix-Based Integrated Office Automation Products.

    Science.gov (United States)

    1994-04-01

    recipient preferences of networked UNIX users. An e-mail directory contains the preferred applications (e.g., FrameMaker, Excel) for each user. [The remainder of this record is a flattened import/export compatibility table covering, among others, FrameMaker (UNIX), Interleaf (UNIX), IslandWrite, DXF, EPSI and FrameMaker (MIF).]

  19. Utilities in UNIX

    International Nuclear Information System (INIS)

    Perez, L.

    2002-01-01

    This manual is intended for users with some experience of the UNIX operating system, to help them work more efficiently with the UNIX variants of most vendors. It covers the majority of UNIX commands and the shell built-in functions used to create scripts, and gives a brief explanation of the variables in several environments. In addition, other products are included that are increasingly integrated into most UNIX operating systems, for example the scanning and processing language awk, the print server LPRng, the GNU utilities, the batch subsystem, etc. The manual was initially based on one specific UNIX, but it has been written to apply to most UNIX systems in use: Tru64 UNIX, AIX, IRIX, HP-UX, Solaris and Linux, and many examples are included in each chapter. The purpose of this manual is to provide a UNIX reference for advanced users on any operating system of the UNIX family. (Author)

  20. Protection against hostile algorithms in UNIX software

    Science.gov (United States)

    Radatti, Peter V.

    1996-03-01

    Protection against hostile algorithms contained in Unix software is a growing concern without easy answers. Traditional methods used against similar attacks in other operating system environments such as MS-DOS or Macintosh are insufficient in the more complex environment provided by Unix. Additionally, Unix presents a special and significant problem in this regard due to its open and heterogeneous nature. These problems are expected to become both more common and more pronounced as 32-bit multiprocess network operating systems become popular. Therefore, the problems experienced today are a good indicator of the problems and the solutions that will be experienced in the future, no matter which operating system becomes predominant.

  1. Understanding Unix/Linux programming a guide to theory and practice

    CERN Document Server

    Molay, Bruce

    2003-01-01

    This book explains in a clear and coherent manner how Unix works, how to understand existing Unix programs, and how to design and create new Unix programs. The book is organized by subsystem, each presented in visual terms and explained using vivid metaphors. It breaks the information into manageable parts that can be presented, explained, and mastered. By using case studies and an extremely reader-friendly manner to illustrate complex ideas and concepts, the book covers the basics of systems programming, users, files and manuals, how to read a directory, using ls, writing pwd, studying stty, writing a video game, studying sh, environment and shell variables, I/O redirection and pipes, servers and sockets, writing a web server, license servers, and concurrent functions. For Unix system administrators and programmers, network programmers, and others who have used other operating systems and need to learn Unix programming to expand their skill sets.

  2. Migration of the UNIX Application for eFAST CANDU Nuclear Power Plant Analyzer

    International Nuclear Information System (INIS)

    Suh, Jae Seung; Sohn, Dae Seong; Kim, Sang Jae; Jeun, Gyoo Dong

    2006-01-01

    Since the mid 1980s, corporate data centers have been moving away from mainframes running dedicated operating systems to mini-computers, often using one or other of the myriad flavors of UNIX. At the same time, the users' experience of these systems has, in many cases, stayed the same, involving text-based interaction with dumb terminals or a terminal-emulation session on a Personal Computer. More recently, IT managers have questioned this approach, and have been looking at changes in the UNIX marketplace and the increasing expense of being tied in to single-vendor software and hardware solutions. The growth of Linux as a lightweight version of UNIX has fueled this interest, raising the number of organizations that are considering a migration to alternative platforms. The various implementations of the UNIX operating system have served industry well, as witnessed by the very large base both of installed systems and large-scale applications installed on those systems. However, there are increasing signs of dissatisfaction with expensive, often proprietary solutions and a growing sense that perhaps the concept of 'big iron' has had its day in the same way as it has for most of the mainframes of the type portrayed in 1970s science fiction films. One of the most extraordinary and unexpected successes of the Intel PC architecture is the extent to which this basic framework has been extended to encompass very large server and data center environments. Large-scale hosting companies are now offering enterprise level services to multiple client companies at availability levels of over 99.99 percent on what are simply racks of relatively cheap PCs. Technologies such as clustering, Network Load Balancing, and Component Load Balancing enable the personal computer to take on and match the levels of throughput, availability, and reliability of all but the most expensive 'big iron' solutions and the supercomputers.

  3. Python for Unix and Linux system administration

    CERN Document Server

    Gift, Noah

    2007-01-01

    Python is an ideal language for solving problems, especially in Linux and Unix networks. With this pragmatic book, administrators can review various tasks that often occur in the management of these systems, and learn how Python can provide a more efficient and less painful way to handle them. Each chapter in Python for Unix and Linux System Administration presents a particular administrative issue, such as concurrency or data backup, and presents Python solutions through hands-on examples. Once you finish this book, you'll be able to develop your own set of command-line utilities with Pytho

  4. Integrating UNIX workstation into existing online data acquisition systems for Fermilab experiments

    International Nuclear Information System (INIS)

    Oleynik, G.

    1991-03-01

    With the availability of cost-effective computing power from multiple vendors of UNIX workstations, experiments at Fermilab are adding such computers to their VMS-based online data acquisition systems. In anticipation of this trend, we have extended the software products available in our widely used VAXONLINE and PANDA data acquisition software systems to provide support for integrating these workstations into existing distributed online systems. The software packages we are providing pave the way for the smooth migration of applications from the current Data Acquisition Host and Monitoring computers running the VMS operating system to UNIX-based computers of various flavors. We report on software for Online Event Distribution from VAXONLINE and PANDA, integration of Message Reporting Facilities, and a framework under UNIX for experiments to monitor and view the raw event data produced at any level in their DA system. We have developed software that allows host UNIX computers to communicate with intelligent front-end embedded read-out controllers and processor boards running the pSOS operating system. Both RS-232 and Ethernet control paths are supported. This enables calibration and hardware monitoring applications to be migrated to these platforms. 6 refs., 5 figs

  5. Unix version of CALOR89 for calorimeter applications

    International Nuclear Information System (INIS)

    Handler, T.

    1992-01-01

    CALOR89 is a system of coupled Monte Carlo particle transport computer codes which has been successfully employed for the estimation of calorimeter parameters in High Energy Physics. In the past, CALOR89 ran on various IBM machines and on the CRAY X-MP at Lawrence Livermore Lab, machines with non-UNIX operating systems. In this report we present a UNIX version of CALOR89, which is especially suited for UNIX workstations. Moreover, CALOR89 has been supplemented with two new program packages which make it more user friendly: CALPREP, a program for the preparation of the input files for CALOR89 in general geometry, and ANALYZ, an analysis package to extract the final results from CALOR89 relevant to calorimeters. This report also provides two script files, LCALOR and PCALOR. LCALOR runs the CALOR89 sequence of programs and EGS4 for a given configuration sequentially on a single processor, and PCALOR runs them concurrently on a multiprocessor unix workstation.

  6. A small Unix-based data acquisition system

    International Nuclear Information System (INIS)

    Engberg, D.; Glanzman, T.

    1993-06-01

    The proposed SLAC B Factory detector plans to use Unix-based machines for all aspects of computing, including real-time data acquisition and experimental control. An R&D program has been established to investigate the use of Unix in the various aspects of experimental computation. Earlier R&D work investigated the basic real-time aspects of the IBM RS/6000 workstation running AIX. The next step in this R&D is the construction of a prototype data acquisition system which attempts to exercise many of the features needed in the final on-line system in a realistic situation. For this project, we have combined efforts with a team studying the use of novel cell designs and gas mixtures in a new prototype drift chamber.

  7. Virtual network computing: cross-platform remote display and collaboration software.

    Science.gov (United States)

    Konerding, D E

    1999-04-01

    VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC servers can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.
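
    The client-to-server direction of this model - user input encoded as small messages - can be illustrated with a short C sketch. The byte layout below follows the published RFB PointerEvent message (type 5, button mask, then x and y in network byte order); the connection handshake and framebuffer-update side are omitted, so this is an illustration of the protocol's shape rather than a working client.

```c
/* Sketch of VNC's client->server input path: a pointer event encoded
 * as in the published RFB protocol. Framing/handshake are omitted. */
#include <stdint.h>
#include <stdio.h>

/* Encode an RFB PointerEvent: type=5, button mask, x, y (big-endian). */
static size_t encode_pointer_event(uint8_t *buf, uint8_t buttons,
                                   uint16_t x, uint16_t y)
{
    buf[0] = 5;                   /* message type: PointerEvent */
    buf[1] = buttons;             /* bit mask of pressed buttons */
    buf[2] = (uint8_t)(x >> 8);   /* x position, network byte order */
    buf[3] = (uint8_t)(x & 0xff);
    buf[4] = (uint8_t)(y >> 8);   /* y position */
    buf[5] = (uint8_t)(y & 0xff);
    return 6;
}

int main(void)
{
    uint8_t msg[6];
    size_t n = encode_pointer_event(msg, 0x01, 200, 150); /* left click at (200,150) */
    for (size_t i = 0; i < n; i++)
        printf("%02x ", msg[i]);
    printf("\n");   /* these bytes would be written to the server socket */
    return 0;
}
```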

  8. Learning Unix for Mac OS X Tiger Unlock the Power of Unix

    CERN Document Server

    Taylor, Dave

    2005-01-01

    Thoroughly revised and updated for Mac OS X Tiger, this new edition introduces Mac users to the Terminal application and shows you how to navigate the command interface, explore hundreds of Unix applications that come with the Mac, and, most importantly, how to take advantage of both the Mac and Unix interfaces. If you want to master the command-line, this gentle guide to using Unix on Mac OS X Tiger is well worth its cover price

  9. The Fermi Unix Environment - Dealing with Adolescence

    Science.gov (United States)

    Pordes, Ruth; Nicholls, Judy; Wicks, Matt

    Fermilab's Computing Division started early in the definition, implementation, and promulgation of a common environment for users across the Laboratory's UNIX platforms and installations. Based on our experience over nearly five years, we discuss the status of the effort, ongoing developments and needs, some analysis of where we could have done better, and future directions that will allow us to provide better and more complete service to our customers. In particular, with the power of the new PCs making enthusiastic converts of physicists to the PC world, we are faced with the challenge of expanding the paradigm to non-UNIX platforms in a uniform and consistent way.

  10. Implementation of a control system test environment in UNIX

    International Nuclear Information System (INIS)

    Brittain, C.R.; Otaduy, P.J.; Rovere, L.A.

    1990-01-01

    This paper discusses how UNIX features such as shared memory, remote procedure calls, and signalling have been used to implement a distributed computational environment ideal for the development and testing of digital control systems. The resulting environment, based on features commonly available in commercial workstations, is flexible, allows process simulation and controller development to proceed in parallel, and provides for testing and validation in a realistic environment. In addition, the use of shared memory to exchange data allows other tasks such as user interfaces and recorders to be added without affecting the process simulation or controllers. A library of functions is presented which provides a simple interface to the features described. These functions can be used in either C or FORTRAN programs and have been tested on a network of Sun workstations and an ENCORE parallel computer. 6 refs., 2 figs
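
    The shared-memory exchange described above can be sketched with the standard System V calls available on such workstations. This is a minimal illustration, not the paper's library: the structure fields and the segment key are assumptions, and a real system would add synchronization around the reads and writes.

```c
/* Sketch of shared-memory data exchange between a process simulation
 * and its controllers/recorders. Field names and key are illustrative. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

struct plant_state {          /* variables shared between tasks */
    double temperature;
    double pressure;
    double valve_position;
};

int main(void)
{
    /* Create (or attach to) a segment known to all cooperating tasks. */
    key_t key = 0x4449;       /* arbitrary agreed-upon key */
    int id = shmget(key, sizeof(struct plant_state), IPC_CREAT | 0666);
    if (id < 0) { perror("shmget"); return 1; }

    struct plant_state *st = shmat(id, NULL, 0);
    if (st == (void *)-1) { perror("shmat"); return 1; }

    st->temperature = 553.0;      /* the simulation writes... */
    st->pressure = 15.2;
    st->valve_position = 0.42;

    /* ...and a controller, recorder, or user interface attached to the
       same key reads the values without disturbing the simulation. */
    printf("T=%.1f P=%.1f valve=%.2f\n",
           st->temperature, st->pressure, st->valve_position);

    shmdt(st);
    return 0;
}
```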

  11. Multimedia Synchronization and UNIX - or - If Multimedia Support is the Problem, Is UNIX the Solution?

    NARCIS (Netherlands)

    D.C.A. Bulterman (Dick); G. van Rossum (Guido); D.T. Winter (Dik)

    1991-01-01

    This paper considers the role of UNIX in supporting multimedia applications. In particular, we consider the ability of the UNIX operating system (in general) and the UNIX I/O system (in particular) to support the synchronization of a number of high-bandwidth data sets that must be

  12. Fermi UNIX environment

    International Nuclear Information System (INIS)

    Nicholls, J.

    1991-03-01

    The introduction of UNIX at Fermilab involves multiple platforms and multiple vendors. Additionally, a single user may have to use more than one platform. This heterogeneity and multiplicity makes it necessary to define a Fermilab environment for UNIX so that as much as possible the systems ''look and feel'' the same. We describe our environment, including both the commercial products and the local tools used to support it. Other products designed for the UNIX environment are also described. 19 refs

  13. Mac OS X Tiger for Unix Geeks

    CERN Document Server

    Jepson, Brian

    2005-01-01

    If you're one of the many Unix developers drawn to Mac OS X for its Unix core, you'll find yourself in surprisingly unfamiliar territory. Unix and Mac OS X are kissing cousins, but there are enough pitfalls and minefields in going from one to another that even a Unix guru can stumble, and most guides to Mac OS X are written for Mac aficionados. For a Unix developer, approaching Tiger from the Mac side is a bit like learning Russian by reading the Russian side of a Russian-English dictionary. Fortunately, O'Reilly has been the Unix authority for over 25 years, and in Mac OS X Tiger for Unix Gee

  14. Mac OS X for Unix Geeks (Leopard)

    CERN Document Server

    Rothman, Ernest E; Rosen, Rich

    2009-01-01

    If you've been lured to Mac OS X because of its Unix roots, this invaluable book serves as a bridge between Apple's Darwin OS and the more traditional Unix systems. The new edition offers a complete tour of Mac OS X's Unix shell for Leopard and Tiger, and helps you find the facilities that replace or correspond to standard Unix utilities. Learn how to compile code, link to libraries, and port Unix software to Mac OS X and much more with this concise guide.

  15. Improved operating scenarios of the DIII-D tokamak as a result of the addition of UNIX computer systems

    International Nuclear Information System (INIS)

    Henline, P.A.

    1995-10-01

    The increased use of UNIX-based computer systems for machine control, data handling and analysis has greatly enhanced the operating scenarios and operating efficiency of the DIII-D tokamak. This paper will describe some of these UNIX systems and their specific uses. These include the plasma control system, the electron cyclotron heating control system, the analysis of electron temperature and density measurements, and the general data acquisition system (which is collecting over 130 Mbytes of data). The speed and total capability of these systems has dramatically affected the ability to operate DIII-D. The improved operating scenarios include better plasma shape control, due to the more thorough MHD calculations done between shots, and the new ability to see the time dependence of profile data as it relates across different spatial locations in the tokamak. Other analysis which engenders improved operating abilities will be described.

  16. Progress of data processing system in JT-60 utilizing the UNIX-based workstations

    International Nuclear Information System (INIS)

    Sakata, Shinya; Kiyono, Kimihiro; Oshima, Takayuki; Sato, Minoru; Ozeki, Takahisa

    2007-07-01

    The JT-60 data processing system (DPS) has a three-level hierarchy. At the top level is the JT-60 inter-shot processor (MSP-ISP), a mainframe computer which provides communication with the JT-60 supervisory control system and supervises the internal communication inside the DPS. The middle level consists of minicomputers, and the bottom level comprises the individual diagnostic subsystems, which consist of CAMAC and VME modules. To meet the demand for advanced diagnostics, the DPS has progressed in stages from a three-level hierarchy, which depended on the processing power of the MSP-ISP, to a two-level, decentralized data processing system (New-DPS) built on UNIX-based workstations and network technology. This replacement has been accomplished, and the New-DPS started operation in October 2005. In this report, we describe the development and improvement of the New-DPS, whose functions were decentralized from the MSP-ISP to the UNIX-based workstations. (author)

  17. San Onofre 2/3 simulator: The move from Unix to Windows

    International Nuclear Information System (INIS)

    Paquette, C.; Desouky, C.; Gagnon, V.

    2006-01-01

    CAE has been developing nuclear power plant (NPP) simulators for over 30 years for customers around the world. While numerous operating systems are used today for simulators, many of the existing simulators were developed to run on workstation-type computers using a variant of the Unix operating system. Today, thanks to the advances in the power and capabilities of Personal Computers (PCs), and because most simulators will eventually need to be upgraded, more and more of these RISC-processor-based simulators will be converted to PC-based platforms running either the Windows or Linux operating systems. CAE's multi-platform simulation environment runs on the UNIX, Linux and Windows operating systems, enabling simulators to be 'open' and highly interoperable systems using industry-standard software components and methods. The result is simulators that are easier to maintain and modify as reference plants evolve. In early January 2003, CAE set out to upgrade Southern California Edison's San Onofre Unit 2/3 UNIX-based simulator with its latest integrated simulation environment. This environment includes CAE's instructor station Isis, the latest ROSE modeling and runtime tool, as well as the deployment of a new reactor kinetics model (COMET) and a new nuclear steam supply system model (ANTHEM2000). The chosen simulation platform is PC-based and runs the Windows XP operating system. The main features and achievements of the San Onofre 2/3 simulator's modernization from RISC/Unix to Intel/Windows XP, running CAE's current simulation environment, are the subject of this paper. (author)

  18. HUMPF [Heterogeneous Unix Montecarlo Production Facility] users guide

    International Nuclear Information System (INIS)

    Cahill, P.; Edgecock, R.; Fisher, S.M.; Gee, C.N.P.; Gordon, J.C.; Kidd, T.; Leake, J.; Rigby, D.J.; Roberts, J.H.C.

    1992-11-01

    The Heterogeneous Unix Monte Carlo Production Facility (HUMPF) simplifies the running of particle physics simulation programs on Unix workstations. Monte Carlo is the largest consumer of IBM CPU capacity within the Atlas centre at Rutherford Appleton Laboratory (RAL). It is likely that the future computing requirements of the LEP and HERA experiments cannot be satisfied by the IBM 3090 system. HUMPF adds extra capacity, and can be expanded with minimal effort. Monte Carlo programs are CPU-bound, and make little use of the vector or the input/output capacity of the IBM 3090. Such programs are therefore excellent candidates to use the spare capacity of powerful workstations. The main data storage is still handled centrally by the IBM 3090 and its peripherals. The HUMPF facility is suitable for any program with a similar profile. (author)

  19. Interface and integration of a silicon graphics UNIX computer with the Encore based SCE SONGS 2/3 simulator

    International Nuclear Information System (INIS)

    Olmos, J.; Lio, P.; Chan, K.S.

    1991-01-01

    The SONGS Unit 2/3 simulator was originally implemented in 1983 on a Master/Slave 32/7780 Encore MPX platform by the Singer-Link Company. In 1986, a 32/9780 MPX Encore computer was incorporated into the simulator computer system to provide the additional CPU processing power needed to install the PACE plant monitoring system and to enable the upgrade of the NSSS simulation to the advanced RETACT/STK models. Since the spring of 1990, the SCE SONGS Nuclear Training Division simulator technical staff, in cooperation with Micro Simulation Inc., has undertaken a project to integrate a Silicon Graphics UNIX-based computer with the Encore MPX SONGS 2/3 simulation computer system. In this paper the authors review the objectives, the advantages to be gained, the software and hardware approaches utilized, and the results achieved so far by the project.

  20. UNIX-a solution to the compatibility problem

    International Nuclear Information System (INIS)

    Gulbranson, R.L.

    1983-01-01

    The UNIX operating system (TM Bell Laboratories) has achieved a high degree of popularity in recent years. It is rapidly becoming a de facto standard as the operating system for 16- and 32-bit microcomputers. The adoption of this operating system by the physics community offers several substantial advantages: a portable software environment (editors, file system, etc.), freedom to choose among a variety of higher-level languages for software applications, and computer hardware vendor independence.

  1. UNIX by examples

    International Nuclear Information System (INIS)

    Nee, F.

    1990-10-01

    This report discusses the following topics in basic UNIX programming: file structure; frequently used commands/utilities; control structures used in shell scripts; C-shell programming; and Bourne shell programming.

  2. Work with Apple's Rhapsody Operating System which Allows Simultaneous UNIX Program Development, UNIX Program Execution, and PC Application Execution

    OpenAIRE

    Summers, Don; Riley, Chris; Cremaldi, Lucien; Sanders, David

    2001-01-01

    Over the past decade, UNIX workstations have provided a very powerful program development environment. However, workstations are more expensive than PCs and Macintoshes and require a system manager for day-to-day tasks such as disk backup, adding users, and setting up print queues. Native commercial software for system maintenance and "PC applications" has been lacking under UNIX. Apple's new Rhapsody operating system puts the current MacOS on a NeXT UNIX foundation and adds an enhanced NeXTS...

  3. COMPUTING SERVICES DURING THE ANNUAL CERN SHUTDOWN

    CERN Multimedia

    2001-01-01

    As in previous years, computing services run by IT division will be left running unattended during the annual shutdown. The following points should be noted. No interruptions are scheduled for local and wide area networking and the ACB, e-mail and unix interactive services. Unix batch services will be available but without access to manually mounted tapes. Dedicated Engineering services, general purpose database services and the Helpdesk will be closed during this period. An operator service will be maintained and can be reached at extension 75011 or by Email to computer.operations@cern.ch. Users should be aware that, except where there are special arrangements, any major problems that develop during this period will most likely be resolved only after CERN has reopened. In particular, we cannot guarantee backups for Home Directory files (for Unix or Windows) or for email folders. Any changes that you make to your files during this period may be lost in the event of a disk failure. Please note that all t...

  4. Colgate one of first to build global computing grid

    CERN Multimedia

    Magno, L

    2003-01-01

    "Colgate-Palmolive Co. has become one of the first organizations in the world to build an enterprise network based on the grid computing concept. Since mid-August, the consumer products firm has been working to connect approximately 50 geographically dispersed Unix servers and storage devices in an enterprise grid network" (1 page).

  5. Developments of the general computer network of NIPNE-HH

    International Nuclear Information System (INIS)

    Mirica, M.; Constantinescu, S.; Danet, A.

    1997-01-01

    Since 1991 the general computer network of NIPNE-HH has been developed and connected to RNCN (Romanian National Computer Network) for research and development, and it offers the Romanian physics research community an efficient and cost-effective infrastructure to communicate and collaborate with fellow researchers abroad, and to collect and exchange the most up-to-date information in their research area. RNCN is targeted on the following main objectives: setting up a technical and organizational infrastructure meant to provide national and international electronic services for the Romanian scientific research community; providing a rapid and competitive tool for the exchange of information in the framework of the Research and Development (R-D) community; using the scientific and technical databases available in the country and offered by the national networks of other countries through international networks; and providing support for information and for scientific and technical co-operation. RNCN has two international links: to EBONE via ACONET (64 kbps) and to EuropaNET via Hungarnet (64 kbps). The guiding principle in designing the project of the general computer network of NIPNE-HH, as part of RNCN, was to implement an open system based on OSI standards, taking into account the following criteria: development of a flexible solution according to OSI specifications; a reliable gateway to the existing network already in use, allowing access to worldwide networks; use of the TCP/IP transport protocol for each Local Area Network (LAN) and for the connection to RNCN; and integration of different and heterogeneous software and hardware platforms (DOS, Windows, UNIX, VMS, Linux, etc.) through specific interfaces. The major objectives achieved in developing the general computer network of NIPNE-HH are: linking all the existing and newly installed computer equipment and providing adequate connectivity. LANs from departments

  6. GEMPAK 5.1 - A GENERAL METEOROLOGICAL PACKAGE (UNIX VERSION)

    Science.gov (United States)

    Desjardins, M. L.

    1994-01-01

    GEMPAK is a general meteorological software package developed at NASA/Goddard Space Flight Center. It includes programs to analyze and display surface, upper-air, and gridded data, including model output. There are very general programs to list, edit, and plot data on maps, to display profiles and time series, to draw and fill contours, to draw streamlines, to plot symbols for clouds, sky cover, and pressure tendency, and to draw cross sections in the case of gridded data and sounding data. In addition, there are Barnes objective analysis programs to grid surface and upper-air data. The programs include the capabilities to derive meteorological parameters from those found in the dataset, to perform vertical interpolations of sounding data to different coordinate systems, and to compute an extensive set of gridded diagnostic quantities by specifying various nested combinations of scalar and vector arithmetic, algebraic, and differential operators. The GEMPAK 5.1 graphics/transformation subsystem, GEMPLT, provides device-independent graphics. GEMPLT also has the capability to display output in a variety of map projections or overlaid on satellite imagery. GEMPAK 5.1 is written in FORTRAN 77 and C and has been implemented on VAX computers under VMS and on computers running the UNIX operating system. During installation and normal use, this package occupies approximately 100 MB of hard disk space. The UNIX version of GEMPAK includes drivers for several graphic output systems including MIT's X Window System (X11R4), Sun GKS, PostScript (color and monochrome), Silicon Graphics, and others. The VMS version of GEMPAK also includes drivers for several graphic output systems including PostScript (color and monochrome). The VMS version is delivered with the object code for the Transportable Applications Environment (TAE) program, version 4.1, which serves as a user interface. A color monitor is recommended for displaying maps on video display devices. Data for rendering

  7. Montecarlo Simulations for a Lep Experiment with Unix Workstation Clusters

    Science.gov (United States)

    Bonesini, M.; Calegari, A.; Rossi, P.; Rossi, V.

    Modular systems of RISC-CPU-based computers have been implemented for large productions of Monte Carlo simulated events for the DELPHI experiment at CERN. From a pilot system based on DEC 5000 CPUs, a full-size system based on a CONVEX C3820 UNIX supercomputer and a cluster of HP 735 workstations has been put into operation as a joint effort between INFN Milano and CILEA.

  8. Addressing the Digital Divide in Contemporary Biology: Lessons from Teaching UNIX.

    Science.gov (United States)

    Mangul, Serghei; Martin, Lana S; Hoffmann, Alexander; Pellegrini, Matteo; Eskin, Eleazar

    2017-10-01

    Life and medical science researchers increasingly rely on applications that lack a graphical interface. Scientists who are not trained in computer science face an enormous challenge analyzing high-throughput data. We present a training model for use of command-line tools when the learner has little to no prior knowledge of UNIX. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Real-time UNIX in HEP data acquisition

    International Nuclear Information System (INIS)

    Buono, S.; Gaponenko, I.; Jones, R.; Mapelli, L.; Mornacchi, G.; Prigent, D.; Sanchez-Corral, E.; Skiadelli, M.; Toppers, A.; Duval, P.Y.; Ferrato, D.; Le Van Suu, A.; Qian, Z.; Rondot, C.; Ambrosini, G.; Fumagalli, G.; Aguer, M.; Huet, M.

    1994-01-01

    Today's experimentation in high energy physics is characterized by an increasing need for sensitivity to rare phenomena and complex physics signatures, which require the use of huge and sophisticated detectors and consequently a high-performance readout and data acquisition. Multi-level triggering, hierarchical data collection and an ever-increasing amount of processing power, distributed throughout the data acquisition layers, will impose a number of features on the software environment, especially the need for a high level of standardization. Real-time UNIX seems today to be the best solution for the platform independence, operating system interface standards and real-time features necessary for data acquisition in HEP experiments. We present the results of the evaluation, in a realistic application environment, of a real-time UNIX operating system: the EP/LX real-time UNIX system. ((orig.))

  10. Unix Security Cookbook

    Science.gov (United States)

    Rehan, S. C.

    This document has been written to help Site Managers secure their Unix hosts from being compromised by hackers. I have given brief introductions to the security tools along with downloading, configuring and running information. I have also included a section on my recommendations for installing these security tools starting from an absolute minimum security requirement.

  11. Overview of the DIII-D program computer systems

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1997-11-01

    Computer systems pervade every aspect of the DIII-D National Fusion Research program. This includes real-time systems acquiring experimental data from data acquisition hardware; CPU server systems performing short-term and long-term data analysis; desktop activities such as word processing, spreadsheets, and scientific paper publication; and systems providing mechanisms for remote collaboration. The DIII-D network ties all of these systems together and connects to the ESNET wide area network. This paper will give an overview of these systems, including their purposes and functionality and how they connect to other systems. Computer systems include seven different types of UNIX systems (HP-UX, REALIX, SunOS, Solaris, Digital UNIX, Ultrix, and IRIX), OpenVMS systems (both VAX and Alpha), Macintosh, Windows 95, and more recently Windows NT systems. Most of the network internally is Ethernet with some use of FDDI. A T3 link connects to ESNET and thus to the Internet. Recent upgrades to the network have notably improved its efficiency, but the demand for bandwidth is ever increasing. By means of software and mechanisms still in development, computer systems at remote sites are playing an increasing role both in accessing and analyzing data and even in participating in certain controlling aspects of the experiment. The advent of audio/video over the Internet is now presenting a new means for remote sites to participate in the DIII-D program.

  12. FASTRAN II - FATIGUE CRACK GROWTH STRUCTURAL ANALYSIS (UNIX VERSION)

    Science.gov (United States)

    Newman, J. C.

    1994-01-01

    loads may be either tensile or compressive. Several standardized aircraft flight-load histories, such as TWIST, Mini-TWIST, FALSTAFF, Inverted FALSTAFF, Felix and Gaussian, are included as options. FASTRAN II also includes two other methods that will help the user input spectrum load histories. The two methods are: (1) a list of stress points, and (2) a flight-by-flight history of stress points. Examples are provided in the user manual. Developed as a research program, FASTRAN II has successfully predicted crack growth in many metallic materials under various aircraft spectrum loading. A computer program DKEFF which is a part of the FASTRAN II package was also developed to analyze crack growth rate data from laboratory specimens to obtain the effective stress-intensity factor against crack growth rate relations used in FASTRAN II. FASTRAN II is written in standard FORTRAN 77. It has been successfully compiled and implemented on Sun4 series computers running SunOS and on IBM PC compatibles running MS-DOS using the Lahey F77L FORTRAN compiler. Sample input and output data are included with the FASTRAN II package. The UNIX version requires 660K of RAM for execution. The standard distribution medium for the UNIX version (LAR-14865) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the MS-DOS version (LAR-14944) is a 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The program was developed in 1984 and revised in 1992. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a trademark of International Business Machines Corp. MS-DOS is a trademark of Microsoft, Inc. F77L is a trademark of the Lahey Computer Systems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. PKWARE and PKUNZIP are trademarks of PKWare

  13. 1990 CERN School of Computing

    International Nuclear Information System (INIS)

    1991-01-01

    These Proceedings contain written versions of lectures delivered at the 1990 CERN School of Computing, covering a variety of topics. Computer networks are treated in three papers: standards in computer networking; evolution of local and metropolitan area networks; asynchronous transfer mode, the solution for broadband ISDN. Data acquisition and analysis are the topic of papers on: data acquisition using MODEL software; graphical event analysis. Two papers in the field of signal processing treat digital image processing and the use of digital signal processors in HEP. Another paper reviews the present state of digital optical computing. Operating systems and programming discipline are covered in two papers: UNIX, evolution towards distributed systems; new developments in program verification. Three papers treat miscellaneous topics: computer security within the CERN environment; numerical simulation in fluid mechanics; fractals. An introduction to transputers and Occam gives an account of the tutorial lectures given at the School. (orig.)

  14. Script-viruses Attacks on UNIX OS

    Directory of Open Access Journals (Sweden)

    D. M. Mikhaylov

    2010-06-01

    In this article attacks on UNIX OS are considered. Currently antivirus developers concentrate on protecting systems from the viruses that are most common and attack popular operating systems. If a system or its components are not often attacked, then antivirus products do not protect those components, as it is not profitable. The same situation holds for script viruses targeting UNIX OS, as most experts consider it impossible for such viruses to obtain enough rights to attack. Nevertheless, the main conclusion of this article is that such viruses can be very powerful and can attack systems and gain enough rights.

  15. The transition of GTDS to the Unix workstation environment

    Science.gov (United States)

    Carter, D.; Metzinger, R.; Proulx, R.; Cefola, P.

    1995-01-01

    Future flight dynamics systems should take advantage of the possibilities provided by current and future generations of low-cost, high-performance workstation computing environments with graphical user interfaces. The port of the existing mainframe flight dynamics systems to the workstation environment offers an economical approach for combining the tremendous engineering heritage that has been encapsulated in these systems with the advantages of the new computing environments. This paper will describe the successful transition of the Draper Laboratory R&D version of GTDS (Goddard Trajectory Determination System) from the IBM mainframe to the Unix workstation environment. The approach will be a mix of historical timeline notes, descriptions of the technical problems overcome, and descriptions of associated SQA (software quality assurance) issues.

  16. UNIX in high energy physics: What we can learn from the initial experiences at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Butler, J.N.

    1991-03-01

    The reasons why Fermilab decided to support the UNIX operating system are reviewed and placed in the context of an overall model for high energy physics data analysis. The strengths and deficiencies of the UNIX environment for high energy physics are discussed. Fermilab's early experience in dealing with an ''open'' multivendor environment, both for computers and for peripherals, is described. The human resources required to fully exploit the opportunities are clearly growing. The possibility of keeping the development and support efforts within reasonable bounds may depend on our ability to collaborate or at least to share information even more effectively than we have in the past. 7 refs., 4 figs., 5 tabs.

  17. UNIX trademark in high energy physics: What we can learn from the initial experiences at Fermilab

    International Nuclear Information System (INIS)

    Butler, J.N.

    1991-03-01

    The reasons why Fermilab decided to support the UNIX operating system are reviewed and placed in the context of an overall model for high energy physics data analysis. The strengths and deficiencies of the UNIX environment for high energy physics are discussed. Fermilab's early experience in dealing with an ''open'' multivendor environment, both for computers and for peripherals, is described. The human resources required to fully exploit the opportunities are clearly growing. The possibility of keeping the development and support efforts within reasonable bounds may depend on our ability to collaborate, or at least to share information, even more effectively than we have in the past. 7 refs., 4 figs., 5 tabs

  18. DET/MPS - THE GSFC ENERGY BALANCE PROGRAM, DIRECT ENERGY TRANSFER/MULTIMISSION SPACECRAFT MODULAR POWER SYSTEM (UNIX VERSION)

    Science.gov (United States)

    Jagielski, J. M.

    1994-01-01

    The DET/MPS programs model and simulate the Direct Energy Transfer and Multimission Spacecraft Modular Power System in order to aid both in design and in analysis of orbital energy balance. Typically, the DET power system has the solar array connected directly to the spacecraft bus, and the central building block of MPS is the Standard Power Regulator Unit. DET/MPS allows a minute-by-minute simulation of the power system's performance as it responds to various orbital parameters, focusing its output on solar array output and battery characteristics. While this package is limited in terms of orbital mechanics, it is sufficient to calculate eclipse and solar array data for circular or non-circular orbits. DET/MPS can be adjusted to run one or sequential orbits up to about one week, simulated time. These programs have been used on a variety of Goddard Space Flight Center spacecraft projects. DET/MPS is written in FORTRAN 77 with some VAX-type extensions. Any FORTRAN 77 compiler that includes VAX extensions should be able to compile and run the program with little or no modification. The compiler must at least support free-form (or tab-delineated) source format and 'DO WHILE ... END DO' control structures. DET/MPS is available for three platforms: GSC-13374, for DEC VAX series computers running VMS, is available in DEC VAX Backup format on a 9-track 1600 BPI tape (standard distribution) or TK50 tape cartridge; GSC-13443, for UNIX-based computers, is available on a .25 inch streaming magnetic tape cartridge in UNIX tar format; and GSC-13444, for Macintosh computers running A/UX with either the NKR FORTRAN or AbSoft MacFORTRAN II compilers, is available on a 3.5 inch 800K Macintosh format diskette. Source code and test data are supplied. The UNIX version of DET requires 90K of main memory for execution. DET/MPS was developed in 1990. A/UX and Macintosh are registered trademarks of Apple Computer, Inc. VMS, DEC VAX and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories.

  19. Doing accelerator physics using SDDS, UNIX, and EPICS

    International Nuclear Information System (INIS)

    Borland, M.; Emery, L.; Sereno, N.

    1995-01-01

    The use of the SDDS (Self-Describing Data Sets) file protocol, together with the UNIX operating system and EPICS (Experimental Physics and Industrial Control System), has proved powerful during the commissioning of the APS (Advanced Photon Source) accelerator complex. The SDDS file protocol has permitted a tool-oriented approach to developing applications, wherein generic programs are written that function as part of multiple applications. While EPICS-specific tools were written for data collection, automated experiment execution, closed-loop control, and so forth, data processing and display are done with the SDDS Toolkit. Experiments and data reduction are implemented as UNIX shell scripts that coordinate the execution of EPICS-specific tools and SDDS tools. Because of the power and generic nature of the individual tools and of the UNIX shell environment, automated experiments can be prepared and executed rapidly in response to unanticipated needs or new ideas. Examples are given of the application of this methodology to beam motion characterization, beam-position-monitor offset measurements, and klystron characterization

  20. Switching the JLab Accelerator Operations Environment from an HP-UX Unix-based to a PC/Linux-based environment

    International Nuclear Information System (INIS)

    Mcguckin, Theodore

    2008-01-01

    The Jefferson Lab Accelerator Controls Environment (ACE) was predominantly based on the HP-UX Unix platform from 1987 through the summer of 2004. During this period the Accelerator Machine Control Center (MCC) underwent a major renovation which included introducing Redhat Enterprise Linux machines, first as specialized process servers and then gradually as general login servers. As computer programs and scripts required to run the accelerator were modified, and inherent problems with the HP-UX platform compounded, more development tools became available for use with Linux and the MCC began to be converted over. In May 2008 the last HP-UX Unix login machine was removed from the MCC, leaving only a few Unix-based remote-login servers still available. This presentation will explore the process of converting an operational Control Room environment from the HP-UX to Linux platform as well as the many hurdles that had to be overcome throughout the transition period

  1. COMPUTING SERVICES DURING THE ANNUAL CERN SHUTDOWN

    CERN Multimedia

    2000-01-01

    As in previous years, computing services run by IT division will be left running unattended during the annual shutdown. The following points should be noted. No interruptions are scheduled for local and wide area networking and the ACB, e-mail and unix interactive services. Maintenance work is scheduled for the NICE home directory servers and the central Web servers. Users must, therefore, expect service interruptions. Unix batch services will be available but without access to HPSS or to manually mounted tapes. Dedicated Engineering services, general purpose database services and the Helpdesk will be closed during this period. An operator service will be maintained and can be reached at extension 75011 or by email to: computer.operations@cern.ch Users should be aware that, except where there are special arrangements, any major problems that develop during this period will most likely be resolved only after CERN has reopened. In particular, we cannot guarantee backups for Home Directory files for eithe...

  2. Real-time on a standard UNIX workstation?

    International Nuclear Information System (INIS)

    Glanzman, T.

    1992-09-01

    This is a report of an ongoing R&D project which is investigating the use of standard UNIX workstations for real-time data acquisition for a major new experimental initiative, the SLAC B Factory (PEP II). For this work an IBM RS/6000 workstation running the AIX operating system is used. Real-time extensions to the UNIX operating system are explored and their performance measured. These extensions comprise a set of AIX-specific and POSIX-compliant system services. Benchmark comparisons are made with embedded processor technologies. Results are presented for a simple prototype on-line system for laboratory testing of a new prototype drift chamber
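
    A minimal C sketch of the kind of POSIX real-time services the report refers to may help: fixed-priority scheduling plus locked memory. This illustrates generic POSIX.1b interfaces, not the project's own code; the priority value is arbitrary.

        /* Hedged sketch: POSIX real-time extensions of the sort such a
           project would measure. Requires a POSIX.1b-capable UNIX and,
           on most systems, sufficient privilege. */
        #include <stdio.h>
        #include <sched.h>
        #include <sys/mman.h>

        int main(void)
        {
            struct sched_param sp = { .sched_priority = 50 }; /* arbitrary */

            /* Fixed-priority FIFO scheduling keeps the acquisition loop
               from being preempted by ordinary time-sharing processes. */
            if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
                perror("sched_setscheduler");

            /* Lock all pages to avoid page-fault latency during acquisition. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
                perror("mlockall");

            /* ... the real-time data acquisition loop would run here ... */
            return 0;
        }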

  3. Unix Domain Sockets Applied in Android Malware Should Not Be Ignored

    Directory of Open Access Journals (Sweden)

    Xu Jiang

    2018-03-01

    Full Text Available Increasingly, malicious Android apps use various methods to steal private user data without their knowledge. Detecting the leakage of private data is the focus of mobile information security. An initial investigation found that none of the existing security analysis systems can track the flow of information through Unix domain sockets to detect the leakage of private data through such sockets, which can result in zero-day exploits in the information security field. In this paper, we conduct the first systematic study on Unix domain sockets as applied in Android apps. Then, we identify scenarios in which such apps can leak private data through Unix domain sockets, which the existing dynamic taint analysis systems do not catch. Based on these insights, we propose and implement JDroid, a taint analysis system that can track information flows through Unix domain sockets effectively to detect such privacy leaks.
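
    To make the channel concrete, here is a minimal C illustration of Unix domain socket IPC (using socketpair); the "private data" payload is purely illustrative, and this is not code from JDroid or from the analyzed apps.

        /* Two connected AF_UNIX endpoints: data passed this way is invisible
           to taint trackers that watch only files and inet sockets. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>

        int main(void)
        {
            int fds[2];
            char buf[64];

            if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) {
                perror("socketpair");
                return 1;
            }

            /* One endpoint writes (a stand-in for a component holding
               private data)... */
            const char *msg = "device-id:12345"; /* illustrative payload */
            write(fds[0], msg, strlen(msg));

            /* ...and the peer reads it on the other side. */
            ssize_t n = read(fds[1], buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("received: %s\n", buf);
            }
            close(fds[0]);
            close(fds[1]);
            return 0;
        }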

  4. HONTIOR - HIGHER-ORDER NEURAL NETWORK FOR TRANSFORMATION INVARIANT OBJECT RECOGNITION

    Science.gov (United States)

    Spirkovska, L.

    1994-01-01

    first, but it also contains a number of functions to display and edit training and test images. Finally, the third program is an auxiliary program which calculates the included angles for a given input field size. HONTIOR is written in C language, and was originally developed for Sun3 and Sun4 series computers. Both graphic and command line versions of the program are provided. The command line version has been successfully compiled and executed both on computers running the UNIX operating system and on DEC VAX series computer running VMS. The graphic version requires the SunTools windowing environment, and therefore runs only on Sun series computers. The executable for the graphics version of HONTIOR requires 1Mb of RAM. The standard distribution medium for HONTIOR is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The package includes sample input and output data. HONTIOR was developed in 1991. Sun, Sun3 and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.

  5. UNIX-based operating systems robustness evaluation

    Science.gov (United States)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.
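
    The thesis's own test programs are not reproduced in this record; the following hypothetical C probe merely illustrates the resource-usurpation style of test described: allocate and touch memory until the system refuses (capped here for safety).

        /* Illustrative memory-usurpation probe (not the thesis's code). */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            const size_t chunk = 1 << 20;          /* 1 MiB per step */
            size_t total = 0;
            void *p;

            while ((p = malloc(chunk)) != NULL) {
                memset(p, 0xAA, chunk);            /* touch pages so they are backed */
                total += chunk;
                if (total >= ((size_t)2 << 30))    /* safety cap at 2 GiB */
                    break;
            }
            printf("obtained %zu MiB before failure or cap\n", total >> 20);
            return 0;
        }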

  6. Introduction of the UNIX International Performance Management Work Group

    Science.gov (United States)

    Newman, Henry

    1993-01-01

    In this paper we present the planned direction of the UNIX International Performance Management Work Group. This group consists of concerned system developers and users who have organized to synthesize recommendations for standard UNIX performance management subsystem interfaces and architectures. The purpose of these recommendations is to provide a core set of performance management functions that can be used to build tools by hardware system developers, vertical application software developers, and performance application software developers.

  7. Introduction to computer networking

    CERN Document Server

    Robertazzi, Thomas G

    2017-01-01

    This book gives a broad look at both fundamental networking technology and new areas that support it and use it. It is a concise introduction to the most prominent, recent technological topics in computer networking. Topics include network technology such as wired and wireless networks, enabling technologies such as data centers, software defined networking, cloud and grid computing and applications such as networks on chips, space networking and network security. The accessible writing style and non-mathematical treatment makes this a useful book for the student, network and communications engineer, computer scientist and IT professional. • Features a concise, accessible treatment of computer networking, focusing on new technological topics; • Provides non-mathematical introduction to networks in their most common forms today; • Includes new developments in switching, optical networks, WiFi, Bluetooth, LTE, 5G, and quantum cryptography.

  8. Computer-communication networks

    CERN Document Server

    Meditch, James S

    1983-01-01

    Computer-Communication Networks presents a collection of articles the focus of which is on the field of modeling, analysis, design, and performance optimization. It discusses the problem of modeling the performance of local area networks under file transfer. It addresses the design of multi-hop, mobile-user radio networks. Some of the topics covered in the book are the distributed packet switching queuing network design, some investigations on communication switching techniques in computer networks and the minimum hop flow assignment and routing subject to an average message delay constraint

  9. Network-based Database Course

    DEFF Research Database (Denmark)

    Nielsen, J.N.; Knudsen, Morten; Nielsen, Jens Frederik Dalsgaard

    A course in database design and implementation has been designed, utilizing existing network facilities. The course is an elementary course for students of computer engineering. Its purpose is to give the students a theoretical database knowledge as well as practical experience with design and implementation. A tutorial relational database and the students' self-designed databases are implemented on the UNIX system of Aalborg University, thus giving the teacher the possibility of live demonstrations in the lecture room, and the students the possibility of interactive learning in their working rooms.

  10. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  11. With Unix under the hood, the Mac has a toehold in the geophysical sector

    Energy Technology Data Exchange (ETDEWEB)

    Roche, P.

    2004-05-01

    A new lease on life in the geophysical sector is predicted for Apple Computer as a result of the company's decision first to convert to the Unix system, and then to develop a new operating system called OS X (O-S-Ten), which runs under a version of Unix called FreeBSD. While Apple shows no indication of interest in marketing its hardware and software to the oil industry, at least one oil company, Houston-based Seitel Inc., is using Apple products for its high performance computing and technical desktop applications, such as storing its onshore and offshore 3-D seismic data on Apple's Xserve RAID rack storage system. In another application, Virginia Polytechnic Institute built a supercomputer using 1,100 64-bit G5 Power Macs. The result is the third fastest supercomputer in the world, running at a blazing 10.28 teraflops, or 10.28 trillion calculations per second. By all accounts, it is well suited to oilpatch tasks such as seismic data processing and running large-scale simulations such as fluid flow through porous media. Seitel is also interested in Apple's 64-bit G5 computers to run its seismic data processing operations, which require large amounts of computing power. While Apple's hardware and software appear to be well adapted to oilpatch tasks, proliferation of Apple products among oilpatch companies is hindered by the almost complete absence of oilpatch software. Parallel Geoscience Corporation, which had long been interested in creating Mac-based software for geoscience applications, is in the process of filling that gap by refocusing its attention on an O-S-Ten version of its Seismic Processing Workshop (SPW). How long it will take to make the conversion will depend on customer demand. 2 figs.

  12. The hybrid UNIX controller for real-time data acquisition

    International Nuclear Information System (INIS)

    Huesman, R.H.; Klein, G.J.; Fleming, T.K.

    1996-01-01

    The authors describe a hybrid data acquisition architecture integrating a conventional UNIX workstation with CAMAC-based real-time hardware. The system combines the high-level programming simplicity and user interface of a UNIX workstation with the low-level timing control available from conventional real-time hardware. They detail this architecture as it has been implemented for control of the Donner 600-Crystal Positron Tomograph (PET600). Low-level data acquisition is carried out in this system using eight LeCroy 3588 histogrammers, which together, after derandomization, acquire events at rates up to 4 MHz, and two dedicated Motorola 6809 microprocessors, which arbitrate fine timing control during acquisition. A Sun Microsystems UNIX workstation is used for high-level control, allowing an easily extensible user interface in an X-Windows environment, as well as real-time communications to the low-level acquisition units. Communication between the high- and low-level units is carried out via a Jorway 73A SCSI-CAMAC crate controller and a serial interface. For this application, the hybrid configuration segments low-level from high-level control for ease of maintenance and provides a low-cost upgrade from dated high-level control hardware

  13. The UNIX/XENIX Advantage: Applications in Libraries.

    Science.gov (United States)

    Gordon, Kelly L.

    1988-01-01

    Discusses the application of the UNIX/XENIX operating system to support administrative office automation functions--word processing, spreadsheets, database management systems, electronic mail, and communications--at the Central Michigan University Libraries. Advantages and disadvantages of the XENIX operating system and system configuration are…

  14. Classroom Computer Network.

    Science.gov (United States)

    Lent, John

    1984-01-01

    This article describes a computer network system that connects several microcomputers to a single disk drive and one copy of software. Many schools are switching to networks as a cheaper and more efficient means of computer instruction. Teachers may be faced with copyright problems when reproducing programs. (DF)

  15. UNIX secure server : a free, secure, and functional server example

    OpenAIRE

    Sastre, Hugo

    2016-01-01

    The purpose of this thesis work was to introduce a UNIX server as a personal server and as a starting point for investigation and development at a professional level. The objective of this thesis was to build a secure server providing not only an FTP server but also an HTTP server and a cloud system for remote backups. OpenBSD was used as the operating system. OpenBSD is a UNIX-like operating system made by hackers for hackers. The difference with other systems that might partially provid...

  16. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.

  17. Computing networks from cluster to cloud computing

    CERN Document Server

    Vicat-Blanc, Pascale; Guillier, Romaric; Soudan, Sebastien

    2013-01-01

    "Computing Networks" explores the core of the new distributed computing infrastructures we are using today:  the networking systems of clusters, grids and clouds. It helps network designers and distributed-application developers and users to better understand the technologies, specificities, constraints and benefits of these different infrastructures' communication systems. Cloud Computing will give the possibility for millions of users to process data anytime, anywhere, while being eco-friendly. In order to deliver this emerging traffic in a timely, cost-efficient, energy-efficient, and

  18. UNIX code management and distribution

    International Nuclear Information System (INIS)

    Hung, T.; Kunz, P.F.

    1992-09-01

    We describe a code management and distribution system based on tools freely available for the UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site such that remote developers are true peers in the code development process

  19. 1993 CERN school of computing. Proceedings

    International Nuclear Information System (INIS)

    Vandoni, C.E.; Verkerk, C.

    1994-01-01

    These proceedings contain the majority of the lectures given at the 1993 CERN School of Computing. Artificial neural networks were treated with particular emphasis on applications in particle physics. A discussion of triggering for experiments at the proposed LHC machine provided a direct connection to data acquisition in this field, whereas another aspect of signal processing was seen in the description of a gravitational wave interferometer. Some of the more general aspects of data handling covered included parallel processing, the IEEE mass storage system, and the use of object stores of events. Lectures on broadband telecommunications networks and asynchronous transfer mode described some recent developments in communications. The analysis and visualization of the data were discussed in the talks on general-purpose portable software tools (PAW++, KUIP and PIAF) and on the uses of computer animation and virtual reality to this end. Lectures on open software discussed operating systems and distributed computing, and the evolution of products like Unix, NT and DCE. (orig.)

  20. Platform computing

    CERN Multimedia

    2002-01-01

    "Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .

  1. Network communication libraries for the next control system of the KEK e-/e+ Linac

    International Nuclear Information System (INIS)

    Kamikubota, Norihiko; Furukawa, Kazuro; Nakahara, Kazuo; Abe, Isamu

    1992-01-01

    The network communication libraries for the next control system of the KEK Linac have been developed. They are based on TCP/IP sockets and work across different operating systems: UNIX, VAX/VMS, and MS-DOS. They also give application programs high source portability among the computer systems provided by various vendors. The performance and problems are presented in detail. (author)
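
    The record does not give the library's own API; as a hedged sketch, this is the underlying BSD/TCP socket call sequence that such a portability layer wraps, in C. The host name "linac-ctrl.example", the port, and the request string are invented for illustration.

        /* Minimal TCP client over BSD sockets (modern getaddrinfo form). */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <netdb.h>
        #include <sys/socket.h>

        int main(void)
        {
            struct addrinfo hints, *res;
            memset(&hints, 0, sizeof hints);
            hints.ai_family = AF_INET;
            hints.ai_socktype = SOCK_STREAM;

            /* Resolve and connect to an (illustrative) control server. */
            if (getaddrinfo("linac-ctrl.example", "5000", &hints, &res) != 0)
                return 1;
            int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
                perror("connect");
                return 1;
            }

            /* One request/reply exchange over the stream. */
            const char *req = "READ MAGNET-01\n";   /* invented protocol */
            write(fd, req, strlen(req));

            char reply[128];
            ssize_t n = read(fd, reply, sizeof reply - 1);
            if (n > 0) { reply[n] = '\0'; printf("%s", reply); }

            close(fd);
            freeaddrinfo(res);
            return 0;
        }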

  2. Yankee links computing needs, increases productivity

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    Yankee Atomic Electric Company provides design and consultation services to electric utility companies that operate nuclear power plants. This means bringing together the skills and talents of more than 500 people in many disciplines, including computer-aided design, human resources, financial services, and nuclear engineering. The company was facing a problem familiar to many companies in the nuclear industry. Key corporate data and applications resided on UNIX or other types of computer systems, but most users at Yankee had personal computers on their desks. How could Yankee enable the PC users to share the data, applications, and resources of the larger computing environment such as UNIX, while ensuring they could still use their favorite PC applications? The solution was PC-NFS from SunSoft, of Chelmsford, Mass., which links PCs to UNIX and other systems. The Yankee computing story is an example of computer downsizing: the trend of moving away from mainframe computers in favor of lower-cost, more flexible client/server computing. Today, Yankee Atomic has more than 350 PCs on desktops throughout the company, using PC-NFS, which enables them to use the data, applications, disks, and printers of the UNIX server systems. This new client/server environment has reduced Yankee's computing costs while increasing its computing power and its ability to respond to customers

  3. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic models in computer networks can be described as complicated systems. These systems show non-linear features, and their behaviour is difficult to simulate. Before deploying network equipment, users want to know the capability of their computer network. They do not want the servers to be overloaded during temporary traffic peaks when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setup of a non-linear simulation model that helps us to observe dataflow problems in networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which is defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates the main behaviours and critical parameters of the network well. Based on this study, we propose to develop a new algorithm, which experimentally determines and predicts the available parameters of the modelled network.

  4. ADAM (Affordable Desktop Application Manager): a Unix desktop application manager

    International Nuclear Information System (INIS)

    Liebana, M.; Marquina, M.; Ramos, R.

    1996-01-01

    ADAM stands for Affordable Desktop Application Manager. It is a GUI developed at CERN with the aim of easing access to applications. The motivation to develop ADAM came from the unavailability of environments like COSE/CDE and their heavy resource consumption. ADAM has proven to be user friendly: new users are able to customize it to their needs in a few minutes. Groups of users may share through ADAM a common application environment. ADAM also integrates the Unix and the PC worlds. PC users can access Unix applications in the same way as their usual Windows applications. This paper describes all the ADAM features, how they are used at CERN Public Services, and the future plans for ADAM. (author)

  5. Network Restoration for Next-Generation Communication and Computing Networks

    Directory of Open Access Journals (Sweden)

    B. S. Awoyemi

    2018-01-01

    Full Text Available Network failures are undesirable but inevitable occurrences for most modern communication and computing networks. A good network design must be robust enough to handle sudden failures, maintain traffic flow, and restore failed parts of the network within a permissible time frame, at the lowest cost achievable and with as little extra complexity in the network as possible. Emerging next-generation (xG communication and computing networks such as fifth-generation networks, software-defined networks, and internet-of-things networks have promises of fast speeds, impressive data rates, and remarkable reliability. To achieve these promises, these complex and dynamic xG networks must be built with low failure possibilities, high network restoration capacity, and quick failure recovery capabilities. Hence, improved network restoration models have to be developed and incorporated in their design. In this paper, a comprehensive study on network restoration mechanisms that are being developed for addressing network failures in current and emerging xG networks is carried out. Open-ended problems are identified, while invaluable ideas for better adaptation of network restoration to evolving xG communication and computing paradigms are discussed.

  6. Parallel computing and networking; Heiretsu keisanki to network

    Energy Technology Data Exchange (ETDEWEB)

    Asakawa, E; Tsuru, T [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T [Japan Petroleum Exploration Co. Ltd., Tokyo (Japan)

    1996-05-01

    This paper describes the trend of parallel computers used in geophysical exploration. Around 1993, parallel computers began to be used for geophysical exploration. At that time these computers were classified mainly as MIMD (multiple instruction stream, multiple data stream), SIMD (single instruction stream, multiple data stream), and the like. Parallel computers were publicized at the 1994 meeting of the Geophysical Exploration Society as a `high precision imaging technology`. Concerning libraries for parallel computers, there was a shift to PVM (parallel virtual machine) in 1993 and to MPI (message passing interface) in 1995. In addition, a FORTRAN90 compiler was released with support for data-parallel and vector computers. In 1993, the networks used were Ethernet, FDDI, CDDI and HIPPI. In 1995, OC-3 products based on ATM began to propagate. However, ATM remains an inter-office high-speed network because ATM service has not yet spread to the public network. 1 ref.
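
    For readers unfamiliar with the MPI style the survey mentions, here is a minimal C example of the message-passing pattern (workers send a value to a master); it assumes an MPI implementation and its wrapper compiler (e.g., mpicc) and is not taken from the paper.

        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int rank, size;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank != 0) {
                /* Each worker sends its rank to the master as a token. */
                MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            } else {
                int token;
                for (int i = 1; i < size; i++) {
                    MPI_Recv(&token, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    printf("master received token from rank %d\n", token);
                }
            }
            MPI_Finalize();
            return 0;
        }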

  7. Low Computational Complexity Network Coding For Mobile Networks

    DEFF Research Database (Denmark)

    Heide, Janus

    2012-01-01

    Network Coding (NC) is a technique that can provide benefits in many types of networks, some examples from wireless networks are: In relay networks, either the physical or the data link layer, to reduce the number of transmissions. In reliable multicast, to reduce the amount of signaling and enable......-flow coding technique. One of the key challenges of this technique is its inherent computational complexity which can lead to high computational load and energy consumption in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several...
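
    As a toy illustration of the coding operation itself (not the thesis's implementation), the binary special case of network coding reduces to XOR: a relay combines two packets into one coded transmission, and a receiver that already holds one original recovers the other by XORing again.

        #include <stdio.h>
        #include <string.h>

        static void xor_packets(unsigned char *dst, const unsigned char *src,
                                size_t len)
        {
            for (size_t i = 0; i < len; i++)
                dst[i] ^= src[i];       /* addition over GF(2) */
        }

        int main(void)
        {
            unsigned char a[4] = { 'D', 'A', 'T', '1' };  /* packet from node A */
            unsigned char b[4] = { 'D', 'A', 'T', '2' };  /* packet from node B */

            /* The relay broadcasts a single coded packet: a XOR b. */
            unsigned char coded[4];
            memcpy(coded, a, sizeof coded);
            xor_packets(coded, b, sizeof coded);

            /* A receiver that already holds `a` recovers `b`. */
            unsigned char recovered[4];
            memcpy(recovered, coded, sizeof recovered);
            xor_packets(recovered, a, sizeof recovered);

            printf("recovered: %.4s\n", recovered);       /* prints DAT2 */
            return 0;
        }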

  8. computer networks

    Directory of Open Access Journals (Sweden)

    N. U. Ahmed

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. We also propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
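
    A minimal C rendering of the token-bucket policer being modeled may help: tokens refill at a fixed rate up to the bucket depth, and a packet conforms only if enough tokens are available. Parameter values here are illustrative, not taken from the paper.

        #include <stdio.h>

        typedef struct {
            double tokens;  /* current bucket fill */
            double depth;   /* bucket capacity (burst allowance) */
            double rate;    /* refill rate, tokens per second */
            double last;    /* time of last update, seconds */
        } token_bucket;

        /* Returns 1 if a packet costing `size` tokens conforms at time `now`. */
        static int tb_conforms(token_bucket *tb, double now, double size)
        {
            tb->tokens += (now - tb->last) * tb->rate;   /* refill */
            if (tb->tokens > tb->depth)
                tb->tokens = tb->depth;
            tb->last = now;

            if (tb->tokens >= size) {
                tb->tokens -= size;                      /* admit the packet */
                return 1;
            }
            return 0;                                    /* police: drop or mark */
        }

        int main(void)
        {
            token_bucket tb = { .tokens = 10, .depth = 10, .rate = 5, .last = 0 };
            const double arrivals[] = { 0.1, 0.2, 0.3, 2.5 };
            for (int i = 0; i < 4; i++)
                printf("t=%.1f conforms=%d\n",
                       arrivals[i], tb_conforms(&tb, arrivals[i], 4.0));
            return 0;
        }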

  9. Basics of Computer Networking

    CERN Document Server

    Robertazzi, Thomas

    2012-01-01

    Springer Brief Basics of Computer Networking provides a non-mathematical introduction to the world of networks. This book covers both technology for wired and wireless networks. Coverage includes transmission media, local area networks, wide area networks, and network security. Written in a very accessible style for the interested layman by the author of a widely used textbook with many years of experience explaining concepts to the beginner.

  10. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  11. From a UNIX to a PC Based SCADA System

    CERN Document Server

    Momal, F

    1999-01-01

    In order to facilitate the development of supervisory applications involved in slow process control (such as cryogenic control), the LHC/IAS Group (Equipment Controls Group) opted, a few years ago, for an industrial SCADA package which runs on UNIX® platforms. However, to reduce costs and follow market trends, it has been decided to move over to a PC-based package. Several processes relating to the testing of the prototypes of the LHC magnets are already controlled in this way. However, it was still necessary to provide all the services previously available to the users, for example, data archiving in central databases, real-time access through the Web, automatic GSM calls, etc. This paper presents the advantages and drawbacks of a PC-based package versus a Unix-based system. It also lists the criteria used in the market survey to arrive at the final selection, as well as the overall architecture, highlighting the developments needed to integrate the package into the global computing environment.

  12. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every CPU instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel
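
    The record does not list MULTITASKER's call names, so as a rough modern analogue of its task model, here is equivalent task spawning and synchronization in C with POSIX threads; a Cray-style multitasking interface would start and wait on tasks in a similar spirit.

        #include <stdio.h>
        #include <pthread.h>

        static void *task(void *arg)
        {
            int id = *(int *)arg;
            printf("task %d running\n", id);
            return NULL;
        }

        int main(void)
        {
            pthread_t tids[4];
            int ids[4];

            /* Spawn four concurrent tasks... */
            for (int i = 0; i < 4; i++) {
                ids[i] = i;
                pthread_create(&tids[i], NULL, task, &ids[i]);
            }
            /* ...then wait for all of them to complete. */
            for (int i = 0; i < 4; i++)
                pthread_join(tids[i], NULL);
            return 0;
        }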

  13. INTERRUPTION TO COMPUTING SERVICES, SATURDAY 9 FEBRUARY

    CERN Multimedia

    2002-01-01

    In order to allow the rerouting of electrical cables which power most of the B513 Computer Room, there will be a complete shutdown of central computing services on Saturday 9 February. This shutdown affects all Central Computing services as well as the general purpose site wide network (137.138), and hence desktop connectivity will also be cut. In order for the physical intervention to start at 07:30, services will be run down as follows (which has been planned to allow the completion of the Friday night backups of AFS and Oracle): Friday 08/02 17:00 Scheduling of batch jobs suspended 18:00 HPSS services shutdown Saturday 09/02 01:00 Any batch jobs that have not ended will be terminated 04:00 Rundown of general purpose services including Mail, Web, NICE, Unix interactive services, ACB, CASTOR, Tape Service and finally Oracle 06:30 AIS services shutdown (including EDH, BHT and HRT) 07:00 AFS service shutdown 07:30 End of guaranteed service on the general purpose site-wide network (137.138) Services will be res...

  14. Computer Networking Laboratory for Undergraduate Computer Technology Program

    National Research Council Canada - National Science Library

    Naghedolfeizi, Masoud

    2000-01-01

    ...) To improve the quality of education in the existing courses related to computer networks and data communications as well as other computer science courses such as programming languages and computer...

  15. Development of a new discharge control system utilizing UNIX workstations and VME-bus systems for JT-60

    Energy Technology Data Exchange (ETDEWEB)

    Akasaka, Hiromi; Sueoka, Michiharu; Takano, Shoji; Totsuka, Toshiyuki; Yonekawa, Izuru; Kurihara, Kenichi; Kimura, Toyoaki [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    2002-01-01

    The JT-60 discharge control system, which had used HIDIC-80E 16-bit minicomputers and CAMAC systems since the start of the JT-60 experiment in 1985, was renewed in March 2001. The new system consists of a UNIX workstation and a VME-bus system, and features a distributed control architecture. The workstation performs message communication with the VME-bus system and with controllers of the JT-60 sub-systems, as well as processing for discharge control, because of its flexibility in constructing a new network and modifying software. The VME-bus system performs discharge sequence control because it is suitable for fast real-time control and flexible with respect to hardware extension. The replacement has improved the control function and reliability of the discharge control system and also provides sufficient performance for future modifications of JT-60. The new system has been running successfully since April 2001. The data acquisition speed was confirmed to be twice as fast as that of the previous system. This report describes the major functions of the discharge control system, technical ideas for developing the system, and results of the initial operation in detail. (author)

  16. EGSNRC distributed systems on commercial network

    International Nuclear Information System (INIS)

    McCormack, J.M.

    2001-01-01

    Full text: EGSnrc is a Monte Carlo based simulation program for determining radiation dose distribution within a body. Computational times are large as each individual photon path must be calculated and every energy absorption event stored. This means that EGSnrc lends itself to distributed processing, as each photon is independent of the next, and code is included within the package to enable this. EGSnrc is currently only supported on Unix-based computer systems, whilst the department has ∼45 Pentium II and III class workstations all operating under Windows NT within a Novell network. This investigation demonstrates the capability of a Windows-based system to perform distributed computation of EGSnrc. All Unix scripts were modified to work as one single Windows NT batch file. The source code was then compiled using the gcc C compiler (a Windows NT version of the Unix compiler) without modification of the underlying source code. A small Visual Basic program was used as a trigger to start the simulation as a Windows NT service, with Novell Z.E.N. Works used to distribute the trigger code to each system. When a trigger was received, the computer began a simulation as a low-priority task in such a way that the user did not see anything on the screen, and so the simulation did not slow down the general running of the computer. The results were then transferred to the network, and collated on a central computer. As an unattended system, a calculation can start within 15 minutes of any desired time, calculate the desired results, and return the results for collation. This effectively demonstrated a distributed Windows NT EGSnrc system. Simulations must be chosen carefully to ensure that each photon can be considered independent, as photon histories do not get distributed. Each system that was used for EGSnrc was required to be capable of running the full EGSnrc simulation on its own. EGSnrc stored the entire result array locally, so a large, high-resolution body required

  17. A UNIX device driver for a Translink II Transputer board

    International Nuclear Information System (INIS)

    Wiley, J.C.

    1991-01-01

    A UNIX device driver for a TransLink II Transputer board is described. A complete listing of the code is presented. The device driver allows a transputer array to be used with the A/UX operating system
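
    The driver's entry points are not given in the record; this hedged C sketch shows only how a user program would reach such a driver through its device node. The path /dev/translink0 and the command bytes are hypothetical.

        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>

        int main(void)
        {
            unsigned char cmd[4] = { 0x01, 0x00, 0x00, 0x00 }; /* hypothetical */
            unsigned char resp[64];

            int fd = open("/dev/translink0", O_RDWR);  /* hypothetical node */
            if (fd < 0) {
                perror("open");
                return 1;
            }

            /* write() hands bytes to the transputer link; read() blocks
               until the array returns data. */
            write(fd, cmd, sizeof cmd);
            ssize_t n = read(fd, resp, sizeof resp);
            printf("read %zd bytes from transputer link\n", n);

            close(fd);
            return 0;
        }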

  18. An Introduction to Parallel Cluster Computing Using PVM for Computer Modeling and Simulation of Engineering Problems

    International Nuclear Information System (INIS)

    Spencer, VN

    2001-01-01

    An investigation has been conducted regarding the ability of clustered personal computers to improve the performance of executing software simulations for solving engineering problems. The power and utility of personal computers continue to grow exponentially through advances in computing capabilities such as newer microprocessors, advances in microchip technologies, electronic packaging, and cost-effective gigabyte-size hard drive capacity. Many engineering problems require significant computing power. Therefore, the computation has to be done by high-performance computer systems that cost millions of dollars and need gigabytes of memory to complete the task. Alternatively, it is feasible to provide adequate computing in the form of clustered personal computers. This method cuts the cost and size by linking (clustering) personal computers together across a network. Clusters also have the advantage that they can be used as stand-alone computers when they are not operating as a parallel computer. Parallel computing software to exploit clusters is available for computer operating systems like Unix, Windows NT, or Linux. This project concentrates on the use of Windows NT and the Parallel Virtual Machine (PVM) system to solve an engineering dynamics problem in Fortran.
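
    As a hedged sketch of the PVM master/worker pattern the report describes (using the PVM 3 C interface rather than the project's Fortran code), a master might spawn workers and gather one integer from each; the executable name "worker" and the message tag are assumptions.

        #include <stdio.h>
        #include <pvm3.h>

        int main(void)
        {
            int tids[4];

            /* Spawn four copies of the worker task across the virtual machine. */
            int started = pvm_spawn("worker", NULL, PvmTaskDefault, "", 4, tids);

            /* Collect one integer result from each worker (message tag 1). */
            for (int i = 0; i < started; i++) {
                int result;
                pvm_recv(-1, 1);            /* any source, tag 1 */
                pvm_upkint(&result, 1, 1);  /* unpack one int, stride 1 */
                printf("got result %d\n", result);
            }
            pvm_exit();
            return 0;
        }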

  19. Nuclear Physics computer networking: Report of the Nuclear Physics Panel on Computer Networking

    International Nuclear Information System (INIS)

    Bemis, C.; Erskine, J.; Franey, M.; Greiner, D.; Hoehn, M.; Kaletka, M.; LeVine, M.; Roberson, R.; Welch, L.

    1990-05-01

    This paper discusses: the state of computer networking within the nuclear physics program; network requirements for nuclear physics; management structure; and issues of special interest to the nuclear physics program office.

  20. Computer Network Security- The Challenges of Securing a Computer Network

    Science.gov (United States)

    Scotti, Vincent, Jr.

    2011-01-01

    This article is intended to give the reader an overall perspective on what it takes to design, implement, enforce and secure a computer network in the federal and corporate world to ensure the confidentiality, integrity and availability of information. While we will be giving you an overview of network design and security, this article will concentrate on the technology and human factors of securing a network and the challenges faced by those doing so. It will cover the large number of policies and the limits of technology and physical efforts to enforce such policies.

  1. Computer Networks as a New Data Base.

    Science.gov (United States)

    Beals, Diane E.

    1992-01-01

    Discusses the use of communication on computer networks as a data source for psychological, social, and linguistic research. Differences between computer-mediated communication and face-to-face communication are described, the Beginning Teacher Computer Network is discussed, and examples of network conversations are appended. (28 references) (LRW)

  2. Computer networking a top-down approach

    CERN Document Server

    Kurose, James

    2017-01-01

    Unique among computer networking texts, the Seventh Edition of the popular Computer Networking: A Top Down Approach builds on the author’s long tradition of teaching this complex subject through a layered approach in a “top-down manner.” The text works its way from the application layer down toward the physical layer, motivating readers by exposing them to important concepts early in their study of networking. Focusing on the Internet and the fundamentally important issues of networking, this text provides an excellent foundation for readers interested in computer science and electrical engineering, without requiring extensive knowledge of programming or mathematics. The Seventh Edition has been updated to reflect the most important and exciting recent advances in networking.

  3. HeNCE: A Heterogeneous Network Computing Environment

    Directory of Open Access Journals (Sweden)

    Adam Beguelin

    1994-01-01

    Full Text Available Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.

  4. Characteristics of the TRISTAN control computer network

    International Nuclear Information System (INIS)

    Kurokawa, Shinichi; Akiyama, Atsuyoshi; Katoh, Tadahiko; Kikutani, Eiji; Koiso, Haruyo; Oide, Katsunobu; Shinomoto, Manabu; Kurihara, Michio; Abe, Kenichi

    1986-01-01

    Twenty-four minicomputers forming an N-to-N token-ring network control the TRISTAN accelerator complex. The computers are linked by optical fiber cables with 10 Mbps transmission speed. The software system is based on NODAL, a multicomputer interpretive language developed at the CERN SPS. The high-level services offered to the users of the network are remote execution by the EXEC, EXEC-P and IMEX commands of NODAL and uniform file access throughout the system. The network software was designed to achieve the fast response of the EXEC command. The performance of the network is also reported. Tasks that overload the minicomputers are processed on the KEK central computers. One minicomputer in the network serves as a gateway to KEKNET, which connects the minicomputer network and the central computers. The communication with the central computers is managed within the framework of the KEK NODAL system. NODAL programs communicate with the central computers calling NODAL functions; functions for exchanging data between a data set on the central computers and a NODAL variable, submitting a batch job to the central computers, checking the status of the submitted job, etc. are prepared. (orig.)

  5. Spontaneous ad hoc mobile cloud computing network.

    Science.gov (United States)

    Lacuesta, Raquel; Lloret, Jaime; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper, we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. In order to realize it, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal presents good efficiency and network performance even with a high number of nodes.

  6. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    Science.gov (United States)

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks
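
    For readers new to this model class, the following minimal C sketch shows a single leaky integrate-and-fire neuron under Euler integration; it is a generic textbook example with illustrative parameters, not the networks derived in the paper.

        #include <stdio.h>

        int main(void)
        {
            double v = 0.0;           /* membrane potential (dimensionless) */
            const double tau = 0.02;  /* membrane time constant, s */
            const double dt  = 0.001; /* integration step, s */
            const double I   = 1.2;   /* constant input drive */
            const double vth = 1.0;   /* spike threshold */

            for (int step = 0; step < 200; step++) {
                v += (dt / tau) * (-v + I);    /* leaky integration */
                if (v >= vth) {                /* threshold crossing: spike */
                    printf("spike at t = %.3f s\n", step * dt);
                    v = 0.0;                   /* reset */
                }
            }
            return 0;
        }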

  7. Offline computing and networking

    International Nuclear Information System (INIS)

    Appel, J.A.; Avery, P.; Chartrand, G.

    1985-01-01

    This note summarizes the work of the Offline Computing and Networking Group. The report is divided into two sections; the first deals with the computing and networking requirements and the second with the proposed way to satisfy those requirements. In considering the requirements, we have considered two types of computing problems. The first is CPU-intensive activity such as production data analysis (reducing raw data to DST), production Monte Carlo, or engineering calculations. The second is physicist-intensive computing such as program development, hardware design, physics analysis, and detector studies. For both types of computing, we examine a variety of issues. These included a set of quantitative questions: how much CPU power (for turn-around and for through-put), how much memory, mass-storage, bandwidth, and so on. There are also very important qualitative issues: what features must be provided by the operating system, what tools are needed for program design, code management, database management, and for graphics

  8. Computing farms

    International Nuclear Information System (INIS)

    Yeh, G.P.

    2000-01-01

    High-energy physics, nuclear physics, space sciences, and many other fields have large challenges in computing. In recent years, PCs have achieved performance comparable to the high-end UNIX workstations, at a small fraction of the price. We review the development and broad applications of commodity PCs as the solution to CPU needs, and look forward to the important and exciting future of large-scale PC computing

  9. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by a former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks Covers fuzzy set theory, fuzzy relations, fuzzy logic interference, fuzzy clustering and classification, fuzzy measures and fuzz...

  10. Computational intelligence synergies of fuzzy logic, neural networks and evolutionary computing

    CERN Document Server

    Siddique, Nazmul

    2013-01-01

    Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing presents an introduction to some of the cutting edge technological paradigms under the umbrella of computational intelligence. Computational intelligence schemes are investigated with the development of a suitable framework for fuzzy logic, neural networks and evolutionary computing, neuro-fuzzy systems, evolutionary-fuzzy systems and evolutionary neural systems. Applications to linear and non-linear systems are discussed with examples. Key features: Covers all the aspect

  11. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer-network studies have examined different trends in network architecture and the various protocols that govern data transfers and guarantee reliable communication among all subsystems. A hierarchical network structure has been proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In such an architecture, all computers on the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the serial channel. This level of computer network can be considered a local area computer network (LACN) suitable for a nuclear power plant control system, since such a plant has geographically dispersed subsystems. Network expansion is straightforward: each added computer (HOST) simply taps the common channel. All the nodes are designed around a microprocessor chip to provide the required intelligence. Each node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section that interfaces with the host computer. The private section naturally tends to vary in its hardware details to match the requirements of the individual host computer. fig 7

  12. Active Computer Network Defense: An Assessment

    Science.gov (United States)

    2001-04-01

    ...sufficient base of knowledge in information technology can be assumed to be working on some form of computer network warfare, even if only defensive in... the Defense Information Infrastructure (DII) to attack. Transmission Control Protocol/Internet Protocol (TCP/IP) networks are inherently resistant to... aims to create this part of information superiority, and computer network defense is one of its fundamental components. Most of these efforts center...

  13. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory

  14. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.

  15. Evaluation of Network Reliability for Computer Networks with Multiple Sources

    Directory of Open Access Journals (Sweden)

    Yi-Kuei Lin

    2012-01-01

    Evaluating the reliability of a network with multiple sources to multiple sinks is a critical issue from the perspective of quality management. Due to the unrealistic definition of paths in the network models of previous literature, existing models are not appropriate for real-world computer networks such as the Taiwan Advanced Research and Education Network (TWAREN). This paper proposes a modified stochastic-flow network model to evaluate the network reliability of a practical computer network with multiple sources where data is transmitted through several light paths (LPs). Network reliability is defined as the probability of delivering a specified amount of data from the sources to the sink. It is taken as a performance index to measure the service level of TWAREN. This paper studies the network reliability of the international portion of TWAREN from two sources (Taipei and Hsinchu) to one sink (New York) through submarine and land-surface cables between Taiwan and the United States.
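
    The reliability measure above lends itself to a small Monte Carlo illustration. Everything in the following Python sketch is a hypothetical stand-in (topology, capacities, availabilities, and demand are invented, not TWAREN data), and links simply work or fail, a crude simplification of the paper's multi-state flow model:

        import random

        LINKS = {                       # (src, dst): (capacity, availability)
            ("Taipei", "LA"): (10, 0.98),
            ("Hsinchu", "LA"): (10, 0.97),
            ("LA", "NewYork"): (15, 0.99),
        }
        DEMAND = 12                     # units that must reach the sink

        def deliverable(up):
            # For this two-leg-plus-trunk toy topology the max flow is the
            # traffic gathered at LA capped by the trunk; a general network
            # needs a real max-flow computation here.
            into_la = sum(c for (s, d), c in up.items() if d == "LA")
            trunk = sum(c for (s, d), c in up.items() if d == "NewYork")
            return min(into_la, trunk)

        def reliability(trials=100_000, seed=1):
            rng = random.Random(seed)
            ok = 0
            for _ in range(trials):
                up = {e: c for e, (c, p) in LINKS.items() if rng.random() < p}
                if deliverable(up) >= DEMAND:
                    ok += 1
            return ok / trials

        print(f"P(deliver >= {DEMAND} units) ~= {reliability():.4f}")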

  16. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  17. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling and monitoring these large systems are described. Possible future directions for parallel computing in High Energy Physics are given.
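
    The farm model works because events are independent: a coordinator hands raw events to worker processes and collects the results. A minimal Python sketch of that pattern (the reconstruction function and event format are invented stand-ins, not CPS itself):

        from multiprocessing import Pool

        def reconstruct(raw_event):
            # Stand-in for reconstruction: reduce a raw event to a DST summary.
            return {"id": raw_event["id"], "ntracks": len(raw_event["hits"])}

        if __name__ == "__main__":
            raw = [{"id": i, "hits": list(range(i % 7))} for i in range(1000)]
            with Pool(processes=8) as farm:        # workers stand in for farm nodes
                dst = farm.map(reconstruct, raw)   # embarrassingly parallel by event
            print(len(dst), "events reconstructed")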

  18. Understanding and designing computer networks

    CERN Document Server

    King, Graham

    1995-01-01

    Understanding and Designing Computer Networks considers the ubiquitous nature of data networks, with particular reference to internetworking and the efficient management of all aspects of networked integrated data systems. In addition it looks at the next phase of networking developments; efficiency and security are covered in the sections dealing with data compression and data encryption; and future examples of network operations, such as network parallelism, are introduced.A comprehensive case study is used throughout the text to apply and illustrate new techniques and concepts as th

  19. Using satellite communications for a mobile computer network

    Science.gov (United States)

    Wyman, Douglas J.

    1993-01-01

    The topics discussed include the following: patrol car automation, mobile computer network, network requirements, network design overview, MCN mobile network software, MCN hub operation, mobile satellite software, hub satellite software, the benefits of patrol car automation, the benefits of satellite mobile computing, and national law enforcement satellite.

  20. Computer network environment planning and analysis

    Science.gov (United States)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems, thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under the coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid, with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning, plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  1. Computer network for electric power control systems. Chubu denryoku (kabu) denryoku keito seigyoyo computer network

    Energy Technology Data Exchange (ETDEWEB)

    Tsuneizumi, T. (Chubu Electric Power Co. Inc., Nagoya (Japan)); Shimomura, S.; Miyamura, N. (Fuji Electric Co. Ltd., Tokyo (Japan))

    1992-06-03

    A computer network for electric power control systems was developed that applies the open systems interconnection (OSI) international standard for communications protocols. In structuring the OSI network, the session layer is accessed directly from the operation functions when high-speed, small-capacity information is transmitted. File transfer, access and control, which has a function of collectively transferring large-capacity data, is applied when low-speed, large-capacity information is transmitted. A verification test of the real-time computer network (RCN) mounting regulation was conducted on a verification model using a minicomputer, and results satisfying practical performance requirements were obtained. For the application interface, kernel, health-check and two-route transmission functions were provided as connection control functions, as were a transmission verification function and a late-arrival discarding function. The system mounting pattern adopts a dualized communication server (CS) structure. The hardware structure may either contain the CS function in a host computer or install it separately. 5 figs., 6 tabs.

  2. HEPLIB '91: International users meeting on the support and environments of high energy physics computing

    International Nuclear Information System (INIS)

    Johnstad, H.

    1991-01-01

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  3. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  4. Implementation of the ALEPH detector simulation code using UNIX with on-line graphics display

    International Nuclear Information System (INIS)

    Corden, M.J.; Georgiopoulos, C.H.; Mermikides, M.E.; Streets, J.

    1989-01-01

    GALEPH, the detector simulation program of the ALEPH detector, was ported to an ETA10 running under AT&T UNIX System V. The program on the ETA10 can be driven using standard UNIX socket connections between the ETA and a Silicon Graphics Iris-3020 workstation. The simulated data on the ETA are transferred, using the machine-independent binary format EPIO, and displayed on the workstation using a locally developed software package for the visualization of the ALEPH detector. The client (Iris-3020) can also pass parameters to the server (ETA10) and thus interactively change the type of events produced, using the same socket connection. (orig.)
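
    The client-server arrangement described above can be sketched with standard Unix stream sockets. The port number and the toy length-prefixed "event record" below are hypothetical; the original transfers used the EPIO machine-independent binary format rather than this ad-hoc framing:

        import socket
        import struct
        import threading

        PORT = 50007                    # hypothetical port number
        ready = threading.Event()

        def recv_exact(sock, n):
            """Read exactly n bytes from a stream socket."""
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed early")
                buf += chunk
            return buf

        def server(payload):
            # Simulation side: send one length-prefixed event record.
            with socket.socket() as srv:
                srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                srv.bind(("127.0.0.1", PORT))
                srv.listen(1)
                ready.set()
                conn, _ = srv.accept()
                with conn:
                    conn.sendall(struct.pack("!I", len(payload)) + payload)

        def client():
            # Display side: fetch one event record.
            ready.wait()
            with socket.socket() as cli:
                cli.connect(("127.0.0.1", PORT))
                (n,) = struct.unpack("!I", recv_exact(cli, 4))
                return recv_exact(cli, n)

        t = threading.Thread(target=server, args=(b"simulated event record",))
        t.start()
        print(client())                 # b'simulated event record'
        t.join()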

  5. Computational network design from functional specifications

    KAUST Repository

    Peng, Chi Han; Yang, Yong Liang; Bao, Fan; Fink, Daniel; Yan, Dongming; Wonka, Peter; Mitra, Niloy J.

    2016-01-01

    of people in a workspace. Designing such networks from scratch is challenging as even local network changes can have large global effects. We investigate how to computationally create networks starting from only high-level functional specifications

  6. Self-Awareness in Computer Networks

    Directory of Open Access Journals (Sweden)

    Ariane Keller

    2014-01-01

    The Internet architecture works well for a wide variety of communication scenarios. However, its flexibility is limited because it was initially designed to provide communication links between a few static nodes in a homogeneous network and did not attempt to solve the challenges of today's dynamic network environments. Although the Internet has evolved to a global system of interconnected computer networks, which links together billions of heterogeneous compute nodes, its static architecture remained more or less the same. Nowadays the diversity in networked devices, communication requirements, and network conditions varies heavily, which makes it difficult for a static set of protocols to provide the required functionality. Therefore, we propose a self-aware network architecture in which protocol stacks can be built dynamically. Those protocol stacks can be optimized continuously during communication according to the current requirements. For this network architecture we propose an FPGA-based execution environment called EmbedNet that allows for a dynamic mapping of network protocols to either hardware or software. We show that our architecture can reduce the communication overhead significantly by adapting the protocol stack and that the dynamic hardware/software mapping of protocols considerably reduces the CPU load introduced by packet processing.

  7. Autonomic computing enabled cooperative networked design

    CERN Document Server

    Wodczak, Michal

    2014-01-01

    This book introduces the concept of autonomic computing driven cooperative networked system design from an architectural perspective. As such it leverages and capitalises on the relevant advancements in both the realms of autonomic computing and networking by welding them closely together. In particular, a multi-faceted Autonomic Cooperative System Architectural Model is defined which incorporates the notion of Autonomic Cooperative Behaviour being orchestrated by the Autonomic Cooperative Networking Protocol of a cross-layer nature. The overall proposed solution not only advocates for the inc

  8. Network survivability performance (computer diskette)

    Science.gov (United States)

    1993-11-01

    File characteristics: Data file; 1 file. Physical description: 1 computer diskette; 3 1/2 in.; high density; 2.0MB. System requirements: Mac; Word. This technical report has been developed to address the survivability of telecommunications networks including services. It responds to the need for a common understanding of, and assessment techniques for network survivability, availability, integrity, and reliability. It provides a basis for designing and operating telecommunication networks to user expectations for network survivability.

  9. Distributed computing for FTU data handling

    Energy Technology Data Exchange (ETDEWEB)

    Bertocchi, A. E-mail: bertocchi@frascati.enea.it; Bracco, G.; Buceti, G.; Centioli, C.; Giovannozzi, E.; Iannone, F.; Panella, M.; Vitale, V

    2002-06-01

    The growth of data warehouses in tokamak experiments is leading fusion laboratories to provide new IT solutions in data handling. In the last three years, the Frascati Tokamak Upgrade (FTU) experimental database was migrated from an IBM mainframe to a Unix distributed computing environment. The migration efforts have taken into account the following items: (1) a new data storage solution based on a storage area network over fibre channel; (2) the Andrew File System (AFS) for wide area network file sharing; (3) a 'one measure/one file' philosophy replacing 'one shot/one file' to provide faster read/write data access; (4) more powerful services, such as AFS, CORBA and MDSplus, to allow users to access the FTU database from different clients, regardless of their O.S.; (5) wide availability of data analysis tools, from the locally developed utility SHOW to the multi-platform Matlab, Interactive Data Language and jScope (all these tools are now able to access the Joint European Torus data as well, in the framework of the remote data access activity); (6) a batch-computing cluster of Alpha/Compaq Tru64 CPUs based on CODINE/GRD to optimize utilization of software and hardware resources.

  10. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol
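
    The heart of NTP's on-wire protocol is a four-timestamp exchange, from which a client estimates its clock offset relative to the server and the round-trip network delay. A small sketch with made-up timestamps (the formulas are the standard NTP estimates):

        # t1 = client transmit, t2 = server receive,
        # t3 = server transmit, t4 = client receive
        # (values below are made-up seconds for illustration)
        t1, t2, t3, t4 = 100.000, 100.120, 100.121, 100.041

        offset = ((t2 - t1) + (t3 - t4)) / 2   # client clock error vs. server
        delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
        print(f"offset = {offset:+.3f} s, delay = {delay:.3f} s")
        # -> offset = +0.100 s, delay = 0.040 s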

  11. The research of computer network security and protection strategy

    Science.gov (United States)

    He, Jian

    2017-05-01

    With the widespread popularity of computer network applications, its security is also received a high degree of attention. Factors affecting the safety of network is complex, for to do a good job of network security is a systematic work, has the high challenge. For safety and reliability problems of computer network system, this paper combined with practical work experience, from the threat of network security, security technology, network some Suggestions and measures for the system design principle, in order to make the masses of users in computer networks to enhance safety awareness and master certain network security technology.

  12. CX: A Scalable, Robust Network for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Peter Cappello

    2002-01-01

    CX, a network-based computational exchange, is presented. The system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination. The object-oriented API is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. Computations, of course, run to completion in the presence of computational hosts that join and leave the ongoing computation. Such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. To break a potential task server bottleneck, a network of task servers is presented. Even though task servers are envisioned as reliable, the self-organizing, scalable network of n servers, described as a sibling-connected height-balanced fat tree, tolerates a sequence of n-1 server failures. Tasks are distributed throughout the server network via a simple "diffusion" process. CX is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. CX also provides a test bed for algorithm research into network-based parallel computation.
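
    The "diffusion" of tasks through a server network can be illustrated with a toy scheduler in which an idle server pulls work from a random neighbour. The three-server topology and workload below are invented; the real CX organizes its servers as a sibling-connected height-balanced fat tree:

        import random
        from collections import deque

        class Server:
            def __init__(self, name):
                self.name, self.tasks, self.neighbours = name, deque(), []

            def step(self, log):
                if self.tasks:
                    log.append((self.name, self.tasks.popleft()))  # run a task
                elif self.neighbours:
                    victim = random.choice(self.neighbours)
                    if victim.tasks:
                        self.tasks.append(victim.tasks.pop())      # pull work over

        random.seed(0)
        a, b, c = Server("a"), Server("b"), Server("c")
        a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
        a.tasks.extend(range(6))        # all work starts on server "a"
        log = []
        while len(log) < 6:
            for s in (a, b, c):
                s.step(log)
        print(log)                      # tasks diffuse away from "a"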

  13. Mobile Agents in Networking and Distributed Computing

    CERN Document Server

    Cao, Jiannong

    2012-01-01

    The book focuses on mobile agents, which are computer programs that can autonomously migrate between network sites. This text introduces the concepts and principles of mobile agents, provides an overview of mobile agent technology, and focuses on applications in networking and distributed computing.

  14. Why not make a PC cluster of your own? 5. AppleSeed: A Parallel Macintosh Cluster for Scientific Computing

    Science.gov (United States)

    Decyk, Viktor K.; Dauger, Dean E.

    We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both the Classic Mac OS and the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  15. 2013 International Conference on Computer Engineering and Network

    CERN Document Server

    Zhu, Tingshao

    2014-01-01

    This book aims to examine innovation in the fields of computer engineering and networking. The book covers important emerging topics in computer engineering and networking, and it will help researchers and engineers improve their knowledge of the state of the art in related areas. The book presents papers from the Proceedings of the 2013 International Conference on Computer Engineering and Network (CENet2013), which was held on July 20-21 in Shanghai, China.

  16. Hyperswitch Network For Hypercube Computer

    Science.gov (United States)

    Chow, Edward; Madan, Herbert; Peterson, John

    1989-01-01

    Data-driven dynamic switching enables high speed data transfer. Proposed hyperswitch network based on mixed static and dynamic topologies. Routing header modified in response to congestion or faults encountered as path established. Static topology meets requirement if nodes have switching elements that perform necessary routing header revisions dynamically. Hypercube topology now being implemented with switching element in each computer node aimed at designing very-richly-interconnected multicomputer system. Interconnection network connects great number of small computer nodes, using fixed hypercube topology, characterized by point-to-point links between nodes.

  17. 4th International Conference on Computer Engineering and Networks

    CERN Document Server

    2015-01-01

    This book aims to examine innovation in the fields of computer engineering and networking. The book covers important emerging topics in computer engineering and networking, and it will help researchers and engineers improve their knowledge of the state of the art in related areas. The book presents papers from the 4th International Conference on Computer Engineering and Networks (CENet2014) held July 19-20, 2014 in Shanghai, China. Covers emerging topics for computer engineering and networking; discusses how to improve productivity by using the latest advanced technologies; examines innovation in the fields of computer engineering and networking.

  18. Computing with Spiking Neuron Networks

    NARCIS (Netherlands)

    H. Paugam-Moisy; S.M. Bohte (Sander); G. Rozenberg; T.H.W. Baeck (Thomas); J.N. Kok (Joost)

    2012-01-01

    Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neuroscience, they derive their strength and interest from an accurate modeling of synaptic interactions

  19. Survey on utilization of database for research and development of global environmental industry technology; Chikyu kankyo sangyo gijutsu kenkyu kaihatsu no tame no database nado no riyo ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    1993-03-01

    To optimize networks and database systems for promotion of the industry technology development contributing to the solution of the global environmental problem, studies are made on reusable information resources and their utilization methods. Reusable information resources comprise external databases and network systems for researchers' information exchange and for computer use. The external databases include commercial and academic databases. As commercial databases, 6 agents and 13 service systems are selected. Academic databases include NACSIS-IR and the databases connected to INTERNET in the U.S. These are used in connection with the UNIX academic research network called INTERNET. For connection with INTERNET, a commercial UNIX network service called IIJ, which started service in April 1993, can be used. However, personal computer communication networks are used for the time being. 6 figs., 4 tabs.

  20. FEAT - FAILURE ENVIRONMENT ANALYSIS TOOL (UNIX VERSION)

    Science.gov (United States)

    Pack, G.

    1994-01-01

    saved as a library file which represents a generic digraph structure for a class of components. The Generate Model feature can then use library files to generate digraphs for every component listed in the modeling tables, and these individual digraph files can be used in a variety of ways to speed generation of complete digraph models. FEAT contains a preprocessor which performs transitive closure on the digraph. This multi-step algorithm builds a series of phantom bridges, or gates, that allow accurate bi-directional processing of digraphs. This preprocessing can be time-consuming, but once preprocessing is complete, queries can be answered and displayed within seconds. A UNIX X-Windows port of version 3.5 of FEAT, XFEAT, is also available to speed the processing of digraph models created on the Macintosh. FEAT v3.6, which is only available for the Macintosh, has some report generation capabilities which are not available in XFEAT. For very large integrated systems, FEAT can be a real cost saver in terms of design evaluation, training, and knowledge capture. The capability of loading multiple digraphs and schematics into FEAT allows modelers to build smaller, more focused digraphs. Typically, each digraph file will represent only a portion of a larger failure scenario. FEAT will combine these files and digraphs from other modelers to form a continuous mathematical model of the system's failure logic. Since multiple digraphs can be cumbersome to use, FEAT ties propagation results to schematic drawings produced using MacDraw II (v1.1v2 or later) or MacDraw Pro. This makes it easier to identify single and double point failures that may have to cross several system boundaries and multiple engineering disciplines before creating a hazardous condition. FEAT v3.6 for the Macintosh is written in C-language using Macintosh Programmer's Workshop C v3.2. It requires at least a Mac II series computer running System 7 or System 6.0.8 and 32 Bit QuickDraw. It also requires a math

  1. A PC/workstation cluster computing environment for reservoir engineering simulation applications

    International Nuclear Information System (INIS)

    Hermes, C.E.; Koo, J.

    1995-01-01

    Like the rest of the petroleum industry, Texaco has been transferring its applications and databases from mainframes to PC's and workstations. This transition has been very positive because it provides an environment for integrating applications, increases end-user productivity, and in general reduces overall computing costs. On the down side, the transition typically results in a dramatic increase in workstation purchases and raises concerns regarding the cost and effective management of computing resources in this new environment. The workstation transition also places the user in a Unix computing environment which, to say the least, can be quite frustrating to learn and to use. This paper describes the approach, philosophy, architecture, and current status of the new reservoir engineering/simulation computing environment developed at Texaco's E and P Technology Dept. (EPTD) in Houston. The environment is representative of those under development at several other large oil companies and is based on a cluster of IBM and Silicon Graphics Intl. (SGI) workstations connected by a fiber-optics communications network and engineering PC's connected to local area networks, or Ethernets. Because computing resources and software licenses are shared among a group of users, the new environment enables the company to get more out of its investments in workstation hardware and software

  2. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  3. Computing chemical organizations in biological networks.

    Science.gov (United States)

    Centler, Florian; Kaleta, Christoph; di Fenizio, Pietro Speroni; Dittrich, Peter

    2008-07-15

    Novel techniques are required to analyze computational models of intracellular processes as they increase steadily in size and complexity. The theory of chemical organizations has recently been introduced as such a technique that links the topology of biochemical reaction network models to their dynamical repertoire. The network is decomposed into algebraically closed and self-maintaining subnetworks called organizations. They form a hierarchy representing all feasible system states, including all steady states. We present three algorithms to compute the hierarchy of organizations for network models provided in SBML format. Two of them compute the complete organization hierarchy, while the third one uses heuristics to obtain a subset of all organizations for large models. While the constructive approach computes the hierarchy starting from the smallest organization in a bottom-up fashion, the flux-based approach employs self-maintaining flux distributions to determine organizations. A runtime comparison on 16 different network models of natural systems showed that neither of the two exhaustive algorithms is superior in all cases. Studying a 'genome-scale' network model with 762 species and 1193 reactions, we demonstrate how the organization hierarchy helps to uncover the model structure and allows one to evaluate the model's quality, for example by detecting components and subsystems of the model whose maintenance is not explained by the model. All data and a Java implementation that plugs into the Systems Biology Workbench are available from http://www.minet.uni-jena.de/csb/prj/ot/tools.
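
    The "closed set" half of an organization is easy to sketch: a species set is closed if no reaction whose reactants are all present produces anything outside the set, and the bottom-up constructive approach repeatedly adds products until a fixed point is reached. The toy reaction network below is invented (not one of the 16 studied models), and the self-maintenance (flux) check is omitted:

        # Toy reaction network (hypothetical): reactants -> products
        REACTIONS = [
            ({"a", "b"}, {"c"}),        # a + b -> c
            ({"c"}, {"a", "d"}),        # c -> a + d
            ({"d"}, {"d"}),             # d -> d
        ]

        def closure(species):
            """Smallest closed superset: keep adding products of applicable
            reactions until a fixed point is reached (bottom-up)."""
            closed = set(species)
            changed = True
            while changed:
                changed = False
                for reactants, products in REACTIONS:
                    if reactants <= closed and not products <= closed:
                        closed |= products
                        changed = True
            return closed

        print(closure({"a", "b"}))      # {'a', 'b', 'c', 'd'}
        # Self-maintenance additionally needs a flux vector with non-negative
        # net production for every member species (a linear program, omitted).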

  4. Development of the computer network of IFIN-HH

    International Nuclear Information System (INIS)

    Danet, A.; Mirica, M.; Constantinescu, S.

    1998-01-01

    The general computer network of the Horia Hulubei National Institute for Physics and Nuclear Engineering (IFIN-HH), as part of RNC (the Romanian National Computer Network for scientific research and technological development), offers the Romanian physics research community an efficient and cost-effective infrastructure to communicate and collaborate with fellow researchers abroad, and to collect and exchange the most up-to-date information in their research areas. RNC is the national project co-ordinated and established by the Ministry of Research and Technology, targeted on the following main objectives: - setting up a technical and organizational infrastructure meant to provide national and international electronic services for the Romanian scientific research community; - providing a rapid and competitive tool for the exchange of information within the R-D community; - using the scientific and technical databases available in the country and offered by the national networks of other countries through international networks; - providing support for information, documentation, and scientific and technical co-operation. The guiding principle in elaborating the project of the general computer network of IFIN-HH was to implement an open system based on OSI standards, without technical barriers in communication between different communities using different computing hardware and software. The major objectives achieved in 1997 in developing the general computer network of IFIN-HH (over 250 computers connected) were: - connecting all the existing and newly installed computer equipment and providing adequate connectivity; - providing the usual Internet services: e-mail, ftp, telnet, finger, gopher; - providing access to World Wide Web resources; - providing on-line statistics of the IP traffic (input and output) of each node of the domain computer network; - improving the performance of the connection with the central node of RNC. (authors)

  5. Computers are stepping stones to improved imaging.

    Science.gov (United States)

    Freiherr, G

    1991-02-01

    Never before has the radiology industry embraced the computer with such enthusiasm. Graphics supercomputers as well as UNIX- and RISC-based computing platforms are turning up in every digital imaging modality and especially in systems designed to enhance and transmit images, says author Greg Freiherr on assignment for Computers in Healthcare at the Radiological Society of North America conference in Chicago.

  6. Principles of network and system administration

    CERN Document Server

    Burgess, Mark

    2002-01-01

    A practical guide for meeting the challenges of planning and designing a network. Network design has to be logical and efficient, decisions have to be made about what services are needed, and security concerns must be addressed. Focusing on general principles, this book will help make the process of setting up, configuring, and maintaining a network much easier. It outlines proven procedures for working in a global community of networked machines, and provides practical illustrations of technical specifics. Readers will also find broad coverage of Linux and other Unix versions, Windows®, Macs,

  7. Computational chaos in massively parallel neural networks

    Science.gov (United States)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software and hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components, or to propagation delays. However, researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. The researchers present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.

  8. SLAC B Factory computing

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1992-02-01

    As part of the research and development program in preparation for a possible B Factory at SLAC, a group has been studying various aspects of HEP computing. In particular, the group is investigating the use of UNIX for all computing, from data acquisition, through analysis, and word processing. A summary of some of the results of this study will be given, along with some personal opinions on these topics

  9. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  10. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, compared with a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  11. Proceedings of workshop on distributed computing and network

    International Nuclear Information System (INIS)

    Abe, F.; Yuasa, F.

    1993-02-01

    'Distributed Computing and Network' is one of the hot topics in the field of computing. Recent progress in computer technology is providing a new paradigm for computing, even in High Energy Physics. In particular, workstation-based computer systems are opening a new, active field of computer application to the sciences. The major topics discussed in this symposium are distributed computing and wide area research networks for domestic and international links. The two-day symposium provided enough topics to foresee the next direction of our computing environment. Seventy people got together to discuss these interesting themes and to exchange information on computer technologies. (J.P.N.)

  12. FORTRAN data file transfer from VAX/VMS to ALPHA/UNIX; Traspaso de ficheros FORTRAN de datos de VAX/VMS a ALPHA/UNIX

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E.; Milligen, B. Ph. van [CIEMAT (Spain)]

    1997-09-01

    Several tools have been developed to access the TJ-IU databases, which currently reside in VAX/VMS servers, from the TJ-II Data Acquisition System DEC ALPHA 8400 server. The TJ-I/TJ-IU databases are not homogeneous and contain several types of data files, namely, SADE, CAMAC and FORTRAN unformatted files. The tools presented in this report allow one to transfer CAMAC and those FORTRAN unformatted files defined herein, from a VAX/VMS server, for data manipulation on the ALPHA/Digital UNIX server. (Author)
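
    A sketch of the record-level handling such a transfer involves: FORTRAN sequential unformatted files commonly frame each record with a 4-byte length before and after it, but the exact layout (and the floating-point format inside the records) varies by platform and compiler, which is why VAX/VMS files needed dedicated conversion tools. The reader below assumes only the common framing:

        import struct

        def read_unformatted(path, byte_order="<"):
            """Read a FORTRAN sequential unformatted file as a list of raw
            records, assuming the common framing: a 4-byte record length
            before and after each record ("<" little-endian, ">" big)."""
            records = []
            with open(path, "rb") as f:
                while True:
                    head = f.read(4)
                    if len(head) < 4:
                        break                     # end of file
                    (n,) = struct.unpack(byte_order + "i", head)
                    data = f.read(n)
                    (tail,) = struct.unpack(byte_order + "i", f.read(4))
                    if tail != n:
                        raise ValueError("inconsistent record markers")
                    records.append(data)
            return records

        # Interpreting the bytes is the second, harder half of the job:
        # e.g. VAX F_floating reals are not IEEE and must be converted
        # before struct.unpack("f", ...) yields sensible numbers.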

  13. Access to CAMAC from VxWorks and UNIX in DART

    International Nuclear Information System (INIS)

    Streets, J.; Meadows, J.; Moore, C.

    1995-05-01

    As part of the DART Project the authors have developed a package of software for CAMAC access from UNIX and VxWorks platforms, with support for several hardware interfaces. They report on developments for the CES CBD8210 VME to parallel CAMAC, the Hytec VSD2992 VME to serial CAMAC and Jorway 411S SCSI to parallel and serial CAMAC branch drivers, and give a summary of the timings obtained

  14. Second International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Konar, Amit; Chakraborty, Aruna

    2014-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two-volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 148 scholarly papers, which have been accepted for presentation from over 640 submissions in the second International Conference on Advanced Computing, Networking and Informatics, 2014, held in Kolkata, India during June 24-26, 2014. The first volume includes innovative computing techniques and relevant research results in informatics with selective applications in pattern recognition, signal/image process...

  15. Analysis of Computer Network Information Based on "Big Data"

    Science.gov (United States)

    Li, Tianli

    2017-11-01

    With the development of the current era, computer networks and big data have gradually become part of people's lives. People use computers to bring convenience to their own lives, but at the same time many network information security problems demand attention. This paper analyzes the information security of computer networks based on "big data" and puts forward some solutions.

  16. Email networks and the spread of computer viruses

    Science.gov (United States)

    Newman, M. E.; Forrest, Stephanie; Balthrop, Justin

    2002-09-01

    Many computer viruses spread via electronic mail, making use of computer users' email address books as a source for email addresses of new victims. These address books form a directed social network of connections between individuals over which the virus spreads. Here we investigate empirically the structure of this network using data drawn from a large computer installation, and discuss the implications of this structure for the understanding and prevention of computer virus epidemics.
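
    A toy simulation of the mechanism studied above: the virus walks the directed graph formed by address books, infecting each recipient who opens the message with some probability. The graph and the opening probability below are invented; the paper's contribution is measuring the degree structure of real address-book networks:

        import random

        BOOKS = {                       # user -> addresses in their book
            "alice": ["bob", "carol"],
            "bob":   ["alice"],
            "carol": ["dave", "bob"],
            "dave":  [],
        }

        def outbreak(patient_zero, p_open=0.5, seed=2):
            rng = random.Random(seed)
            infected = {patient_zero}
            frontier = [patient_zero]
            while frontier:
                sender = frontier.pop()          # infected machine mails its book
                for rcpt in BOOKS[sender]:
                    if rcpt not in infected and rng.random() < p_open:
                        infected.add(rcpt)       # recipient opened the attachment
                        frontier.append(rcpt)
            return infected

        print(outbreak("alice"))        # final set of infected users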

  17. Networking DEC and IBM computers

    Science.gov (United States)

    Mish, W. H.

    1983-01-01

    Local Area Networking of DEC and IBM computers within the structure of the ISO-OSI Seven Layer Reference Model at a raw signaling speed of 1 Mbps or greater is discussed. After an introduction to the ISO-OSI Reference Model and the IEEE-802 Draft Standard for Local Area Networks (LANs), there follows a detailed discussion and comparison of the products available from a variety of manufacturers to perform this networking task. A summary of these products is presented in a table.

  18. SISSY: An example of a multi-threaded, networked, object-oriented databased application

    International Nuclear Information System (INIS)

    Scipioni, B.; Liu, D.; Song, T.

    1993-05-01

    The Systems Integration Support SYstem (SISSY) is presented and its capabilities and techniques are discussed. It is a fully automated data collection and analysis system supporting the SSCL's systems analysis activities as they relate to the Physics Detector and Simulation Facility (PDSF). SISSY itself is a paradigm of effective computing on the PDSF. It uses home-grown code (C++), network programming (RPC, SNMP), relational (SYBASE) and object-oriented (ObjectStore) DBMSs, UNIX operating system services (IRIX threads, cron, system utilities, shell scripts, etc.), and third-party software applications (NetCentral Station, Wingz, DataLink), all of which act together as a single application to monitor and analyze the PDSF.

  19. Mobile Computing and Ubiquitous Networking: Concepts, Technologies and Challenges.

    Science.gov (United States)

    Pierre, Samuel

    2001-01-01

    Analyzes concepts, technologies and challenges related to mobile computing and networking. Defines basic concepts of cellular systems. Describes the evolution of wireless technologies that constitute the foundations of mobile computing and ubiquitous networking. Presents characterization and issues of mobile computing. Analyzes economical and…

  20. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, and policy planners, with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and describe some features of the co-authorship network in relation to author rank. Suggestions for further studies are discussed.

  1. Computing with networks of nonlinear mechanical oscillators.

    Directory of Open Access Journals (Sweden)

    Jean C Coulombe

    As it is getting increasingly difficult to achieve gains in the density and power efficiency of microelectronic computing devices because lithographic techniques are reaching fundamental physical limits, new approaches are required to maximize the benefits of distributed sensors, micro-robots or smart materials. Biologically-inspired devices, such as artificial neural networks, can process information with a high level of parallelism to efficiently solve difficult problems, even when implemented using conventional microelectronic technologies. We describe a mechanical device, which operates in a manner similar to artificial neural networks, that efficiently solves two difficult benchmark problems (computing the parity of a bit stream, and classifying spoken words). The device consists of a network of masses coupled by linear springs and attached to a substrate by non-linear springs, thus forming a network of anharmonic oscillators. As the masses can directly couple to forces applied on the device, this approach combines sensing and computing functions in a single power-efficient device with compact dimensions.

  2. Computer Networks and Globalization

    Directory of Open Access Journals (Sweden)

    J. Magliaro

    2007-07-01

    Communication and information computer networks connect the world in ways that make globalization more natural and inequity more subtle. As educators, we look at these phenomena holistically, analyzing them from the realist's view, thus exploring tensions, (in)equity and (in)justice, and from the idealist's view, thus embracing connectivity, convergence and the development of a collective consciousness. In an increasingly market-driven world we find examples of openness and human generosity that are based on networks, specifically the Internet. After addressing open movements in publishing, the software industry and education, we describe the possibility of a dialectic equilibrium between globalization and indigenousness in view of ecologically designed future smart networks.

  3. Computer networks and their implications for nuclear data

    International Nuclear Information System (INIS)

    Carlson, J.

    1992-01-01

    Computer networks represent a valuable resource for accessing information. Just as the computer has revolutionized the ability to process and analyze information, networks have and will continue to revolutionize data collection and access. A number of services are in routine use that would not be possible without the presence of an (inter)national computer network (which will be referred to as the internet). Services such as electronic mail, remote terminal access, and network file transfers are almost a required part of any large scientific/research organization. These services only represent a small fraction of the potential uses of the internet; however, the remainder of this paper discusses some of these uses and some technological developments that may influence these uses

  4. 37 CFR 1.96 - Submission of computer program listings.

    Science.gov (United States)

    2010-07-01

    ... specifications (no other format shall be allowed): (i) Computer Compatibility: IBM PC/XT/AT, or compatibles, or Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line...

  5. Fast computation with spikes in a recurrent neural network

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.; Seung, H. Sebastian

    2002-01-01

    Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample, a network consisting of N integrate-and-fire neurons with self excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, the network performs a winner-take-all computation for all possible external inputs and initial states of the network. The computation is done very quickly: As soon as the winner spikes once, the computation is completed since no other neurons will spike. For some initial states, the winner is the first neuron to spike, and the computation is done at the first spike of the network. In general, there are M potential winners, corresponding to the top M external inputs. When the external inputs are close in magnitude, M tends to be larger. If M>1, the selection of the actual winner is strongly influenced by the initial states. If a special relation between the excitation and inhibition is satisfied, the network always selects the neuron with the maximum external input as the winner
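
    A small simulation of the network analyzed above (parameter values are illustrative only, not those of the paper): N integrate-and-fire neurons receive constant drives, and the instantaneous all-to-all inhibition ends the computation at the first spike, so the neuron with the largest drive wins when all potentials start equal:

        import numpy as np

        N, THRESH, DT = 5, 1.0, 0.001
        W_SELF, W_INH = 0.5, 2.0          # self-excitation / inhibition strength
        rng = np.random.default_rng(0)
        drive = rng.uniform(0.8, 1.2, N)  # constant external inputs
        v = np.zeros(N)                   # membrane potentials, equal start

        while True:
            v += drive * DT                       # integrate the constant drive
            spikers = np.flatnonzero(v >= THRESH)
            if spikers.size:
                winner = spikers[0]
                v -= W_INH                        # instantaneous global inhibition
                v[winner] += W_INH + W_SELF       # net self-excitation for spiker
                break

        # With equal initial potentials the first (and only) spiker is the
        # neuron with the maximum external input, so one spike ends the race.
        print("winner:", winner, "=", int(np.argmax(drive)))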

  6. A Network SIG is born, DECUS (Switzerland) Newsletter, May 1990

    CERN Document Server

    Heagerty, Denise

    1990-01-01

    This article announces the formation of a Swiss DECUS (DEC Users Group) Network SIG in May 1990. The goal of this SIG is to help Swiss DECnet managers to plan transition from their proprietary DECnet Phase IV networks (e.g. the HEP/SPAN DECnet) to open networks based on DECnet Phase V/OSI. The SIG also proposes to address integration with UNIX based workstations using the Internet's TCP/IP protocols.

  7. Collective network for computer structures

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Chen, Dong [Croton On Hudson, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Takken, Todd E [Brewster, NY; Steinmacher-Burow, Burkhard D [Wernau, DE; Vranas, Pavlos M [Bedford Hills, NY

    2011-08-16

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to needs of a processing algorithm.

  8. Selection and implementation of a laboratory computer system.

    Science.gov (United States)

    Moritz, V A; McMaster, R; Dillon, T; Mayall, B

    1995-07-01

    The process of selection of a pathology computer system has become increasingly complex as there are an increasing number of facilities that must be provided and stringent performance requirements under heavy computing loads from both human users and machine inputs. Furthermore, the continuing advances in software and hardware technology provide more options and innovative new ways of tackling problems. These factors taken together pose a difficult and complex set of decisions and choices for the system analyst and designer. The selection process followed by the Microbiology Department at Heidelberg Repatriation Hospital included examination of existing systems, development of a functional specification followed by a formal tender process. The successful tenderer was then selected using predefined evaluation criteria. The successful tenderer was a software development company that developed and supplied a system based on a distributed network using a SUN computer as the main processor. The software was written using Informix running on the UNIX operating system. This represents one of the first microbiology systems developed using a commercial relational database and fourth generation language. The advantages of this approach are discussed.

  9. Computing motion using resistive networks

    Science.gov (United States)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

    Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in cMOS VLSI circuits and represent plausible candidates for biological vision systems.
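
To make the mechanism concrete, the sketch below solves Kirchhoff's current law, G v = i, for the stationary voltages of a small resistive chain with leak conductances; the network size, conductance values, and injected currents are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch: stationary node voltages of a resistive chain under injected
# currents, obtained by solving Kirchhoff's current law G v = i.
import numpy as np

n = 8
g_lateral = 1.0          # conductance between neighboring nodes
g_leak = 0.3             # conductance from each node to ground

# Build the conductance matrix (weighted graph Laplacian plus leak terms).
G = np.zeros((n, n))
for k in range(n - 1):
    G[k, k] += g_lateral; G[k + 1, k + 1] += g_lateral
    G[k, k + 1] -= g_lateral; G[k + 1, k] -= g_lateral
G += g_leak * np.eye(n)

i_inj = np.zeros(n)
i_inj[2], i_inj[5] = 1.0, -0.5   # injected currents encoding the input data

v = np.linalg.solve(G, i_inj)    # stationary voltage distribution at each node
print(np.round(v, 3))
```

The solved voltages are a smoothed version of the injected currents, which is the regularizing behavior such resistive networks exploit.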

  10. Computer systems and networks status and perspectives

    CERN Document Server

    Zacharov, V

    1981-01-01

    The properties of computers are discussed, both as separate units and in inter-coupled systems. The main elements of modern processor technology are reviewed and the associated peripheral components are discussed in the light of the prevailing rapid pace of developments. Particular emphasis is given to the impact of very large scale integrated circuitry in these developments. Computer networks are considered in some detail, including common-carrier and local-area networks, and the problem of inter-working is included in the discussion. Components of network systems and the associated technology are also among the topics treated.

  11. Computer systems and networks: Status and perspectives

    International Nuclear Information System (INIS)

    Zacharov, Z.

    1981-01-01

The properties of computers are discussed, both as separate units and in inter-coupled systems. The main elements of modern processor technology are reviewed and the associated peripheral components are discussed in the light of the prevailing rapid pace of developments. Particular emphasis is given to the impact of very large scale integrated circuitry in these developments. Computer networks are considered in some detail, including common-carrier and local-area networks, and the problem of inter-working is included in the discussion. Components of network systems and the associated technology are also among the topics treated. (orig.)

  12. Social networks a framework of computational intelligence

    CERN Document Server

    Chen, Shyi-Ming

    2014-01-01

    This volume provides the audience with an updated, in-depth and highly coherent material on the conceptually appealing and practically sound information technology of Computational Intelligence applied to the analysis, synthesis and evaluation of social networks. The volume involves studies devoted to key issues of social networks including community structure detection in networks, online social networks, knowledge growth and evaluation, and diversity of collaboration mechanisms.  The book engages a wealth of methods of Computational Intelligence along with well-known techniques of linear programming, Formal Concept Analysis, machine learning, and agent modeling.  Human-centricity is of paramount relevance and this facet manifests in many ways including personalized semantics, trust metric, and personal knowledge management; just to highlight a few of these aspects. The contributors to this volume report on various essential applications including cyber attacks detection, building enterprise social network...

  13. 3rd International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Chaki, Nabendu

    2016-01-01

Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, and pervasive computing, among many others. This two-volume proceedings explores the combined use of Advanced Computing and Informatics in next-generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 132 scholarly articles, which were accepted for presentation from over 550 submissions to the Third International Conference on Advanced Computing, Networking and Informatics, 2015, held in Bhubaneswar, India during June 23–25, 2015.

  14. Active system area networks for data intensive computations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-04-01

The goal of the Active System Area Networks (ASAN) project is to develop hardware and software technologies for the implementation of active system area networks (ASANs). The use of the term "active" refers to the ability of the network interfaces to perform application-specific as well as system-level computations in addition to their traditional role of data transfer. This project adopts the view that the network infrastructure should be an active computational entity capable of supporting certain classes of computations that would otherwise be performed on the host CPUs. The result is a unique network-wide programming model where computations are dynamically placed within the host CPUs or the NIs depending upon the quality-of-service demands and network/CPU resource availability. The project seeks to demonstrate that such an approach is a better match for data-intensive network-based applications and that the advent of low-cost powerful embedded processors and configurable hardware makes such an approach economically viable and desirable.

  15. Spin networks and quantum computation

    International Nuclear Information System (INIS)

    Kauffman, L.; Lomonaco, S. Jr.

    2008-01-01

    We review the q-deformed spin network approach to Topological Quantum Field Theory and apply these methods to produce unitary representations of the braid groups that are dense in the unitary groups. The simplest case of these models is the Fibonacci model, itself universal for quantum computation. We here formulate these braid group representations in a form suitable for computation and algebraic work. (authors)

  16. Management of seismic data on network

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Bu Heung [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

Since 1979, KIGAM has managed magnetic tapes containing seismic data acquired offshore in Korea and abroad. The archive now amounts to about 13,000 tapes, and other seismic data documents are also held by KIGAM. To handle them, the FOX-PRO database management system has been used since 1993. For a single user it is useful and convenient, because the program is easy to use and provides many well-made utilities. It also has significant problems, however: a user who wants to query information about these magnetic tapes must go to the magnetic tape room where the system is installed, and must know how to use the utilities of the FOX-PRO database management system. For these reasons, the seismic data processing team decided to replace the FOX-PRO system with a client-server system that supports networking over the Internet. After testing and consideration, they selected the following hardware and software (system: PC with networking; OS: Linux and Unix; software: Just Logic/SQL). The main reasons for this selection were, first, that any kind of personal computer is readily available; second, that Linux and Unix are well suited to networking, and Linux in particular is free and easy to obtain from many Internet FTP sites; and last, that Just Logic/SQL is designed for client-server systems, supports Linux, and has a programming style very similar to the C language. The contents of this report are as follows. Chapter 2 shows and comments on the Just Logic/SQL system structure and the files in its sub-directories. Chapter 3 explains the statements used in Just Logic/SQL with some examples. Chapter 4 shows two example programs building the seismic database: a rack list table and an optical disk table, respectively. The rack list table is the database of magnetic tapes managed by KIGAM; the optical disk table records how many, and which, tapes have been converted to optical disk. (author). 4 tabs.
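
As a rough illustration of the rack-list and optical-disk tables described in chapter 4, here is a sketch using SQLite as a stand-in for Just Logic/SQL; the table and column names are hypothetical, not the report's actual schema:

```python
# Illustrative stand-in for the report's rack-list database, using SQLite in
# place of Just Logic/SQL; table and column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE rack_list (
    tape_id  TEXT PRIMARY KEY,   -- magnetic tape label
    survey   TEXT,               -- seismic survey name
    year     INTEGER,
    rack_no  INTEGER)""")
con.execute("""CREATE TABLE optical_disk (
    disk_id  TEXT,
    tape_id  TEXT REFERENCES rack_list(tape_id))""")

con.execute("INSERT INTO rack_list VALUES ('T0001', 'Korea offshore', 1979, 1)")
con.execute("INSERT INTO optical_disk VALUES ('D01', 'T0001')")

# A networked client could now query which tapes were copied to optical disk.
for row in con.execute("""SELECT r.tape_id, r.survey, d.disk_id
                          FROM rack_list r JOIN optical_disk d USING (tape_id)"""):
    print(row)
```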

  17. Integrating Network Management for Cloud Computing Services

    Science.gov (United States)

    2015-06-01

Recoverable fragments of this report mention a backend distributed datastore, high-level objective network policy, performance metrics, SNAT IP allocation, and a controller; cited references include Microsoft Azure (http://azure.microsoft.com/) and Azure ExpressRoute (http://azure.microsoft.com/en-us/services/expressroute/); subject areas listed are networking technologies, services, and protocols; performance of computer and communication networks; and mobile and wireless communications systems.

  18. Discussion on the Technology and Method of Computer Network Security Management

    Science.gov (United States)

    Zhou, Jianlei

    2017-09-01

With the rapid development of information technology, the application of computer network technology has penetrated all aspects of society, changed people's ways of living and working to a certain extent, and brought great convenience to people. But computer network technology is not a panacea: it can promote social development, but it can also cause damage to communities and the country. Because of the openness, ease of sharing and other characteristics of computer networks, there is a very negative impact on computer network security, and loopholes in the technical aspects in particular can cause damage to network information. Based on this, this paper gives a brief analysis of computer network security management problems and security measures.

  19. In-Network Computation is a Dumb Idea Whose Time Has Come

    KAUST Repository

    Sapio, Amedeo; Abdelaziz, Ibrahim; Aldilaijan, Abdulla; Canini, Marco; Kalnis, Panos

    2017-01-01

    Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.

  20. In-Network Computation is a Dumb Idea Whose Time Has Come

    KAUST Repository

    Sapio, Amedeo

    2017-11-27

Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.

  1. DIMACS Workshop on Interconnection Networks and Mapping, and Scheduling Parallel Computations

    CERN Document Server

    Rosenberg, Arnold L; Sotteau, Dominique; NSF Science and Technology Center in Discrete Mathematics and Theoretical Computer Science; Interconnection networks and mapping and scheduling parallel computations

    1995-01-01

The interconnection network is one of the most basic components of a massively parallel computer system. Such systems consist of hundreds or thousands of processors interconnected to work cooperatively on computations. One of the central problems in parallel computing is the task of mapping a collection of processes onto the processors and routing network of a parallel machine. Once this mapping is done, it is critical to schedule computations within and communication among processors so that inputs for a process are available where and when the process is scheduled to be computed. This book contains the refereed proceedings of a workshop that brought together researchers from universities and laboratories, as well as practitioners involved in the design, implementation, and application of massively parallel systems. Focusing on interconnection networks of parallel architectures of today and of the near future, the book includes topics such as network topologies, network properties, message routing, network embeddings, network emulation, mappings, and efficient scheduling.

  2. Line-plane broadcasting in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
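The claimed broadcast proceeds dimension by dimension; the following sketch replays that schedule on a small 3-D mesh to check that every node is reached. The mesh size and root coordinates are arbitrary assumptions:

```python
# Sketch of the line-plane broadcast schedule on a small 3-D mesh: the root
# sends along X, each X-line node sends along Y, and each XY-plane node sends
# along Z. Mesh dimensions and the root node are hypothetical.
from itertools import product

X, Y, Z = 3, 3, 3
root = (0, 0, 0)
received = {root}

# Phase 1: the root sends along the first dimension (the X axis).
for x in range(X):
    received.add((x, root[1], root[2]))
# Phase 2: every node on that X line sends along the second dimension (Y).
for x, y in product(range(X), range(Y)):
    received.add((x, y, root[2]))
# Phase 3: every node in the XY plane sends along the third dimension (Z).
for x, y, z in product(range(X), range(Y), range(Z)):
    received.add((x, y, z))

assert len(received) == X * Y * Z   # every compute node got the message
print(f"{len(received)} nodes reached")
```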

  3. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  4. Code 672 observational science branch computer networks

    Science.gov (United States)

    Hancock, D. W.; Shirk, H. G.

    1988-01-01

    In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.

  5. Network and computing infrastructure for scientific applications in Georgia

    Science.gov (United States)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association - GRENA provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid Infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA is participating.

  6. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The RNN is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and examples of application to practical engineering problems are discussed to show the efficacy of the proposed neural network.

  7. Software network analyzer for computer network performance measurement planning over heterogeneous services in higher educational institutes

    OpenAIRE

    Ismail, Mohd Nazri

    2009-01-01

In the 21st century, the convergence of technologies and services in heterogeneous environments has produced multiple kinds of traffic. This scenario affects the computer networks supporting learning systems in higher educational institutes. Implementation of various services can produce different types of content and quality. Higher educational institutes should have a good computer network infrastructure to support the usage of various services. The capability of the computer network should consist of i) higher bandwidth; ii) ...

  8. Computational network design from functional specifications

    KAUST Repository

    Peng, Chi Han

    2016-07-11

    Connectivity and layout of underlying networks largely determine agent behavior and usage in many environments. For example, transportation networks determine the flow of traffic in a neighborhood, whereas building floorplans determine the flow of people in a workspace. Designing such networks from scratch is challenging as even local network changes can have large global effects. We investigate how to computationally create networks starting from only high-level functional specifications. Such specifications can be in the form of network density, travel time versus network length, traffic type, destination location, etc. We propose an integer programming-based approach that guarantees that the resultant networks are valid by fulfilling all the specified hard constraints and that they score favorably in terms of the objective function. We evaluate our algorithm in two different design settings, street layout and floorplans to demonstrate that diverse networks can emerge purely from high-level functional specifications.

  9. An Optimal Path Computation Architecture for the Cloud-Network on Software-Defined Networking

    Directory of Open Access Journals (Sweden)

    Hyunhun Cho

    2015-05-01

Full Text Available Legacy networks do not expose precise information about the network domain, for reasons of scalability, management and commercial confidentiality, so it is very hard to compute an optimal path to a destination. To meet new network requirements in today's changing ICT environment, the concept of software-defined networking (SDN) has been developed as a technological alternative that overcomes the limitations of the legacy network structure and introduces innovative concepts. The purpose of this paper is to propose an application that calculates the optimal paths for general data transmission and real-time audio/video transmission, which constitute the major services of the National Research & Education Network (NREN), in the SDN environment. The proposed SDN routing computation (SRC) application is designed and applied in a multi-domain network for the efficient use of resources, selection of the optimal path between multiple domains and optimal establishment of end-to-end connections.
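
The record does not give the SRC application's algorithm; as a hedged sketch of the kind of computation such a path-computation service performs, here is Dijkstra's algorithm over a small weighted topology (the topology and link costs are hypothetical):

```python
# Hedged sketch of a path computation an SDN controller application might
# perform: Dijkstra's algorithm over a weighted multi-domain topology.
import heapq

def shortest_path(graph, src, dst):
    """graph: {node: [(neighbor, cost), ...]}; returns (path, total cost)."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:                    # walk predecessors back to the source
        path.append(node); node = prev[node]
    return [src] + path[::-1], dist[dst]

topo = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
print(shortest_path(topo, "A", "D"))      # (['A', 'B', 'C', 'D'], 3)
```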

  10. The ALICE Magnetic System Computation.

    CERN Document Server

    Klempt, W; CERN. Geneva; Swoboda, Detlef

    1995-01-01

In this note we present the first results from the ALICE magnetic system computation, performed in three dimensions with the Vector Fields TOSCA code (version 6.5) [1]. For the calculations we used the IBM RISC System 6000-370 and 6000-550 machines combined in the CERN PaRC UNIX cluster.

  11. Open Problems in Network-aware Data Management in Exa-scale Computing and Terabit Networking Era

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Byna, Surendra

    2011-12-06

    Accessing and managing large amounts of data is a great challenge in collaborative computing environments where resources and users are geographically distributed. Recent advances in network technology led to next-generation high-performance networks, allowing high-bandwidth connectivity. Efficient use of the network infrastructure is necessary in order to address the increasing data and compute requirements of large-scale applications. We discuss several open problems, evaluate emerging trends, and articulate our perspectives in network-aware data management.

  12. An introduction to computer networks

    CERN Document Server

    Rizvi, SAM

    2011-01-01

AN INTRODUCTION TO COMPUTER NETWORKS is a comprehensive textbook focused on and designed to elaborate the technical content in the light of the TCP/IP reference model, exploring both digital and analog data communication. Various communication protocols of the different layers are discussed along with their pseudo-code. The book covers detailed and practical information about the network layer, along with information about IP (including IPv6), OSPF, and Internet multicasting. It also covers TCP congestion control, emphasizes basic principles of fundamental importance concerning the technology and architecture, and provides detailed discussion of leading-edge topics in data communication, LANs and the network layer.

  13. Interfacing ANSYS to user's programs using UNIX shell program

    Energy Technology Data Exchange (ETDEWEB)

    Kim, In Yong; Kim, Beom Shig [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-01-01

It has been considered impossible to interface ANSYS, a commercial finite element code whose source program is not open to the public, to other users' programs. When an analysis must be iterated, the user normally has to wait until the analysis is finished and then read the ANSYS results to prepare the input data for every iteration. In this report, direct interfacing techniques between ANSYS and other programs using UNIX shell programming are proposed. Detailed program listings and an application example are also provided. (Author) 19 refs., 6 figs., 7 tabs.
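
The report's shell-based approach can be paraphrased as a batch loop: generate input, run the solver, parse the results, and repeat. The Python sketch below shows the loop shape only; the solver executable name, command-line flags, and file formats are hypothetical, not ANSYS's documented interface:

```python
# Sketch of the report's idea in Python rather than shell: run the solver in
# batch, parse its output, regenerate the input, and repeat. The executable
# name, flags, and file formats here are hypothetical assumptions.
import subprocess

def write_input(path, load):
    with open(path, "w") as f:
        f.write(f"! auto-generated input deck\nLOAD={load}\n")

load = 100.0
for iteration in range(5):
    write_input("model.inp", load)
    # Batch run; the '-b -i -o' style flags are an assumption, not a real CLI.
    subprocess.run(["ansys_solver", "-b", "-i", "model.inp", "-o", "model.out"],
                   check=True)
    with open("model.out") as f:
        result = float(f.readline().split("=")[1])   # hypothetical output format
    if abs(result) < 1e-3:      # convergence test on the parsed result
        break
    load *= 0.5                 # update the input for the next iteration
```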

  14. Mapping, Awareness, and Virtualization Network Administrator Training Tool (MAVNATT) Architecture and Framework

    Science.gov (United States)

    2015-06-01

… unit may set up and tear down the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical … language and runs on Linux and Unix based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network … start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python.

  15. SPLAI: Computational Finite Element Model for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ruzana Ishak

    2006-01-01

Full Text Available Wireless sensor network refers to a group of sensors linked by a wireless medium to perform distributed sensing tasks. The primary interest is their capability of monitoring the physical environment through the deployment of numerous tiny, intelligent, wirelessly networked sensor nodes. Our interest is a sensor network that includes a few specialized nodes, called processing elements, with limited computational capabilities. In this paper, we propose a model called SPLAI that allows the network to compute a finite element problem, where the processing elements are modeled as the nodes in the linear triangular approximation problem. Our model also considers the case of failures of some of the sensors. A simulation model to visualize this network has been developed using C++ on the Windows environment.

  16. Choice Of Computer Networking Cables And Their Effect On Data ...

    African Journals Online (AJOL)

Computer networking is the order of the day in this Information and Communication Technology (ICT) age. Although a network can be set up through a wireless device, most local connections are made using cables. There are three main computer-networking cables, namely coaxial cable, unshielded twisted pair cable and the optic ...

  17. International Symposium on Computing and Network Sustainability

    CERN Document Server

    Akashe, Shyam

    2017-01-01

The book is a compilation of technical papers presented at the International Research Symposium on Computing and Network Sustainability (IRSCNS 2016) held in Goa, India on 1st and 2nd July 2016. The areas covered in the book are sustainable computing and security, sustainable systems and technologies, sustainable methodologies and applications, sustainable network applications and solutions, user-centered services and systems, and mobile data management. The novel and recent technologies presented in the book will be helpful to researchers and industry in their advanced work.

  18. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    Science.gov (United States)

    2017-05-08

Report AFRL-AFOSR-VA-TR-2017-0102, Integrated Optoelectronic Networks for Application-Driven Multicore Computing, Sudeep Pasricha, Colorado State University, grant FA9550-13-1-0110. The only recoverable abstract fragment reads: "… and supportive materials with innovative architectural designs that integrate these components according to system-wide application needs."

  19. Fair Secure Computation with Reputation Assumptions in the Mobile Social Networks

    Directory of Open Access Journals (Sweden)

    Yilei Wang

    2015-01-01

Full Text Available With the rapid development of mobile devices and wireless technologies, mobile social networks are becoming increasingly available. People can implement many applications on the basis of mobile social networks. Secure computation, like exchanging information and file sharing, is one such application. Fairness in secure computation, which means that either all parties implement the application or none of them does, is deemed an impossible task in traditional secure computation without mobile social networks. Here we regard the applications in mobile social networks as specific functions and focus on achieving fairness for these functions within mobile social networks in the presence of two rational parties. Rational parties value their utilities when they participate in a secure computation protocol in mobile social networks. Therefore, we introduce reputation derived from mobile social networks into the utility definition, such that rational parties have incentives to implement the applications for a higher utility. To the best of our knowledge, the protocol is the first fair secure computation in mobile social networks. Furthermore, it finishes within constant rounds and allows both parties to know the terminal round.

  20. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.

  1. Computer programming and architecture the VAX

    CERN Document Server

    Levy, Henry

    2014-01-01

Takes a unique systems approach to the programming and architecture of the VAX. Using the VAX as a detailed example, the first half of this book offers a complete course in assembly language programming. The second describes higher-level systems issues in computer architecture. Highlights include the VAX assembler and debugger, other modern architectures such as RISCs, multiprocessing and parallel computing, microprogramming, caches and translation buffers, and an appendix on the Berkeley UNIX assembler.

  2. LightKone Project: Lightweight Computation for Networks at the Edge

    OpenAIRE

    Van Roy, Peter; TEKK Tour Digital Wallonia

    2017-01-01

    LightKone combines two recent advances in distributed computing to enable general-purpose computing on edge networks: * Synchronization-free programming: Large-scale applications can run efficiently on edge networks by using convergent data structures (based on Lasp and Antidote from previous project SyncFree) → tolerates dynamicity and loose coupling of edge networks * Hybrid gossip: Communication can be made highly resilient on edge networks by combining gossip with classical distributed al...

  3. Open client/server computing and middleware

    CERN Document Server

    Simon, Alan R

    2014-01-01

Open Client/Server Computing and Middleware provides a tutorial-oriented overview of open client/server development environments and how client/server computing is being done. This book analyzes an in-depth set of case studies about two different open client/server development environments, Microsoft Windows and UNIX, describing the architectures, various product components, and how these environments interrelate. Topics include open systems and client/server computing, next-generation client/server architectures, principles of middleware, and an overview of ProtoGen+. The ViewPaint environment

  4. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    Science.gov (United States)

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  5. Mechanisms of protection of information in computer networks and systems

    Directory of Open Access Journals (Sweden)

    Sergey Petrovich Evseev

    2011-10-01

Full Text Available Protocols of information protection in computer networks and systems are investigated. The basic types of security threats arising from the use of computer networks are classified. The basic mechanisms, services and variants of the realization of cryptosystems for maintaining the authentication, integrity and confidentiality of transmitted information are examined, and their advantages and drawbacks are described. Perspective directions of development of cryptographic transformations for the maintenance of information protection in computer networks and systems are defined and analyzed.

  6. Multiple network alignment on quantum computers

    Science.gov (United States)

    Daskin, Anmer; Grama, Ananth; Kais, Sabre

    2014-12-01

Comparative analyses of graph-structured datasets underlie diverse problems. Examples of these problems include identification of conserved functional components (biochemical interactions) across species, structural similarity of large biomolecules, and recurring patterns of interactions in social networks. A large class of such analysis methods quantifies the topological similarity of nodes across networks. The resulting correspondence of nodes across networks, also called node alignment, can be used to identify invariant subgraphs across the input graphs. Given graphs as input, alignment algorithms use topological information to assign a similarity score to each tuple of nodes, with one element (node) drawn from each of the input graphs. Nodes are considered similar if their neighbors are also similar. An alternate, equivalent view of these network alignment algorithms is to consider the Kronecker product of the input graphs and to identify high-ranked nodes in the Kronecker product graph. Conventional methods such as PageRank and HITS (Hypertext-Induced Topic Selection) can be used for this purpose. These methods typically require computation of the principal eigenvector of a suitably modified Kronecker product matrix of the input graphs. We adopt this alternate view of the problem to address the problem of multiple network alignment. Using the phase estimation algorithm, we show that the multiple network alignment problem can be efficiently solved on quantum computers. We characterize the accuracy and performance of our method and show that it can deliver exponential speedups over conventional (non-quantum) methods.
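
The classical (non-quantum) view described above can be sketched in a few lines: form the Kronecker product of two adjacency matrices and extract its principal eigenvector by power iteration. The two example graphs here are arbitrary assumptions:

```python
# Sketch of the alternate view described above: node-pair similarity as the
# principal eigenvector of the Kronecker product of two (hypothetical) graphs.
import numpy as np

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # triangle graph
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path graph

K = np.kron(A, B)                 # Kronecker product graph on node pairs
x = np.ones(K.shape[0]) / K.shape[0]
for _ in range(100):              # power iteration toward the principal eigenvector
    x = K @ x
    x /= np.linalg.norm(x)

# High-ranked entries suggest aligned node pairs (node i of A with node j of B).
pairs = [(i // 3, i % 3, round(float(s), 3)) for i, s in enumerate(x)]
print(sorted(pairs, key=lambda p: -p[2])[:3])
```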

  7. FORTRAN data files transference from VAX/VMS to ALPHA/UNIX

    International Nuclear Information System (INIS)

    Sanchez, E.; Milligen, B.Ph. van

    1997-01-01

Several tools have been developed to access the TJ-I and TJ-IU databases, which currently reside on VAX/VMS servers, from the TJ-II data acquisition system DEC ALPHA 8400 server. The TJ-I/TJ-IU databases are not homogeneous and contain several types of data files, namely SADE, CAMAC and FORTRAN unformatted files. The tools presented in this report allow one to transfer CAMAC files and those FORTRAN unformatted files defined herein from a VAX/VMS server, for data manipulation on the ALPHA/Digital UNIX server. (Author) 5 refs
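
One practical detail in such transfers is the framing of FORTRAN sequential unformatted files, where each record is wrapped in 4-byte length markers. The sketch below reads that framing, assuming little-endian markers; real VAX data would additionally require VAX floating-point conversion, which this sketch does not attempt:

```python
# Hedged sketch of reading a FORTRAN sequential unformatted file: each record
# is framed by 4-byte little-endian length markers. (Actual VAX data would
# also need VAX F/D-float conversion, not attempted here.)
import struct

def read_records(path):
    records = []
    with open(path, "rb") as f:
        while True:
            head = f.read(4)
            if len(head) < 4:
                break                                   # end of file
            (nbytes,) = struct.unpack("<i", head)       # leading record marker
            payload = f.read(nbytes)
            (tail,) = struct.unpack("<i", f.read(4))    # trailing record marker
            assert tail == nbytes, "corrupt record framing"
            records.append(payload)
    return records

# Example (hypothetical file name): interpret each record as 32-bit integers.
# for rec in read_records("shot001.dat"):
#     print(struct.unpack(f"<{len(rec)//4}i", rec))
```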

  8. An Evaluation of Windows-Based Computer Forensics Application Software Running on a Macintosh

    OpenAIRE

    Gregory H. Carlton

    2008-01-01

    The two most common computer forensics applications perform exclusively on Microsoft Windows Operating Systems, yet contemporary computer forensics examinations frequently encounter one or more of the three most common operating system environments, namely Windows, OS-X, or some form of UNIX or Linux. Additionally, government and private computer forensics laboratories frequently encounter budget constraints that limit their access to computer hardware. Currently, Macintosh computer systems a...

  9. Client/server models for transparent, distributed computational resources

    International Nuclear Information System (INIS)

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. The recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models; previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator is presented, including a discussion of the generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, are presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used in production/computation situations. The NPA incorporates a client/server interface for the transfer/translation of TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing it to become a shared co-processor to the workstation application. 5 refs., 6 figs.
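
As a loose analogue of the RPC client/server paradigm (using Python's standard XML-RPC modules rather than the paper's ONC RPC/XDR stack), the sketch below registers a remote service and calls it as if it were a local function; the service name and values are hypothetical:

```python
# Loose analogue of the RPC client/server paradigm using Python's standard
# xmlrpc modules (XML-RPC, not the paper's ONC RPC/XDR wire format).
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def simulate_step(pressure):
    """A stand-in for a remote co-processor computation."""
    return pressure * 0.95

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(simulate_step)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client calls the remote service as if it were a local function.
proxy = ServerProxy("http://localhost:8000")
print(proxy.simulate_step(101.3))   # approximately 96.235
server.shutdown()
```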

  10. Local computer network of the JINR Neutron Physics Laboratory

    International Nuclear Information System (INIS)

    Alfimenkov, A.V.; Vagov, V.A.; Vajdkhadze, F.

    1988-01-01

A new high-speed local computer network, using an intelligent network adapter (NA) as its hardware base, has been developed in the JINR Neutron Physics Laboratory to increase operating efficiency and data transfer rate. The NA consists of a computer bus interface, a cable former, and a microcomputer segment designed both for program realization of the channel-level protocol and for organization of bidirectional information transfer through a direct access channel between the monochannel and computer memory, with or without buffering in the NA's operating memory device.

  11. CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak

    International Nuclear Information System (INIS)

    VanderLaan, J.F.; Cummings, J.W.

    1993-10-01

The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPUs. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is the MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software will be chronicled, including observation of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and computer I/O speeds are expected to also increase data rates.

  12. CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak

    Science.gov (United States)

    Vanderlaan, J. F.; Cummings, J. W.

    1993-10-01

    The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPU's. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software will be chronicled, including observation of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and computer I/O speeds are expected to also increase data rates.

  13. Genetic networks and soft computing.

    Science.gov (United States)

    Mitra, Sushmita; Das, Ranajit; Hayashi, Yoichi

    2011-01-01

    The analysis of gene regulatory networks provides enormous information on various fundamental cellular processes involving growth, development, hormone secretion, and cellular communication. Their extraction from available gene expression profiles is a challenging problem. Such reverse engineering of genetic networks offers insight into cellular activity toward prediction of adverse effects of new drugs or possible identification of new drug targets. Tasks such as classification, clustering, and feature selection enable efficient mining of knowledge about gene interactions in the form of networks. It is known that biological data is prone to different kinds of noise and ambiguity. Soft computing tools, such as fuzzy sets, evolutionary strategies, and neurocomputing, have been found to be helpful in providing low-cost, acceptable solutions in the presence of various types of uncertainties. In this paper, we survey the role of these soft methodologies and their hybridizations, for the purpose of generating genetic networks.

  14. Computer network access to scientific information systems for minority universities

    Science.gov (United States)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.

  15. Integrated evolutionary computation neural network quality controller for automated systems

    Energy Technology Data Exchange (ETDEWEB)

    Patro, S.; Kolarik, W.J. [Texas Tech Univ., Lubbock, TX (United States). Dept. of Industrial Engineering

    1999-06-01

With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to the identification and control of dynamic processes is discussed. The limitations of using neural networks for control purposes are pointed out, and a different technique, evolutionary computation, is discussed. The results of identifying and controlling an unstable, dynamic process using evolutionary computation methods are presented. A framework for an integrated system, using both neural networks and evolutionary computation, is proposed to identify the process and then control the product quality in a dynamic, multivariable system, in real time.

  16. Quantum Random Networks for Type 2 Quantum Computers

    National Research Council Canada - National Science Library

    Allara, David L; Hasslacher, Brosl

    2006-01-01

Random boolean networks (RBNs) have been studied theoretically and computationally in order to be able to use their remarkable self-healing and large basins of attraction properties as quantum computing architectures, especially...

  17. Computing Tutte polynomials of contact networks in classrooms

    Science.gov (United States)

    Hincapié, Doracelly; Ospina, Juan

    2013-05-01

Objective: The topological complexity of contact networks in classrooms and the potential transmission of an infectious disease were analyzed by sex and age. Methods: The Tutte polynomials, some topological properties and the number of spanning trees were used to algebraically compute the topological complexity. Computations were made with the Maple package GraphTheory. Published data of mutually reported social contacts within a classroom, taken from a primary school and consisting of children in the age ranges of 4-5, 7-8 and 10-11, were used. Results: The algebraic complexity of the Tutte polynomial and the probability of disease transmission increase with age. The contact networks are not bipartite graphs; gender segregation was observed, especially in younger children. Conclusion: Tutte polynomials are tools to understand the topology of contact networks and to derive numerical indexes of such topologies. It is possible to establish relationships between the Tutte polynomial of a given contact network and the potential transmission of an infectious disease within such a network
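
Of the quantities mentioned, the number of spanning trees is the easiest to reproduce: by Kirchhoff's Matrix-Tree theorem it equals any cofactor of the graph Laplacian. The sketch below uses a small hypothetical contact graph, not the published classroom data:

```python
# Sketch: counting spanning trees of a contact network with Kirchhoff's
# Matrix-Tree theorem (any cofactor of the Laplacian). The example edges are
# hypothetical, not the published classroom contact data.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1            # undirected adjacency matrix

L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
minor = np.delete(np.delete(L, 0, 0), 0, 1)   # delete row 0 and column 0
spanning_trees = round(np.linalg.det(minor))
print(spanning_trees)                # 8 for this small graph
```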

  18. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  19. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-06

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  20. Advances in the operation of the DIII-D neutral beam computer systems

    International Nuclear Information System (INIS)

    Phillips, J.C.; Busath, J.L.; Penaflor, B.G.; Piglowski, D.; Kellman, D.H.; Chiu, H.K.; Hong, R.M.

    1998-02-01

The DIII-D neutral beam system routinely provides up to 20 MW of deuterium neutral beam heating in support of experiments on the DIII-D tokamak, and is a critical part of the DIII-D physics experimental program. The four computer systems previously used to control neutral beam operation and data acquisition were designed and implemented in the late 1970's and used on DIII and DIII-D from 1981-1996. By comparison to modern standards, they had become expensive to maintain, slow and cumbersome, making it difficult to implement improvements. Most critical of all, they were not networked computers. During the 1997 experimental campaign, these systems were replaced with new Unix compliant hardware and, for the most part, commercially available software. This paper describes operational experience with the new neutral beam computer systems, and new advances made possible by using features not previously available. These include retention of and access to historical data, an asynchronously fired "rules" base, and a relatively straightforward programming interface. Methods and principles for extending the availability of data beyond the scope of the operator consoles will be discussed.

  1. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    Directory of Open Access Journals (Sweden)

    Shuo Gu

    2017-01-01

    Full Text Available With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  2. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.

    Science.gov (United States)

    Gu, Shuo; Pei, Jianfeng

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  3. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  4. The University of Michigan's Computer-Aided Engineering Network.

    Science.gov (United States)

    Atkins, D. E.; Olsen, Leslie A.

    1986-01-01

    Presents an overview of the Computer-Aided Engineering Network (CAEN) of the University of Michigan. Describes its arrangement of workstations, communication networks, and servers. Outlines the factors considered in hardware and software decision making. Reviews the program's impact on students. (ML)

  5. Digital optical interconnects for photonic computing

    Science.gov (United States)

    Guilfoyle, Peter S.; Stone, Richard V.; Zeise, Frederick F.

    1994-05-01

A 32-bit digital optical computer (DOC II) has been implemented in hardware utilizing 8,192 free-space optical interconnects. The architecture exploits parallel interconnect technology by implementing microcode at the primitive level. A burst mode of 0.8192 × 10^12 binary operations per second has been reliably demonstrated. The prototype has been successful in demonstrating general purpose computation. In addition to emulating the RISC instruction set within the UNIX operating environment, relational database text search operations have been implemented on DOC II.

  6. Conceptual metaphors in computer networking terminology ...

    African Journals Online (AJOL)

    Lakoff & Johnson, 1980) is used as a basic framework for analysing and explaining the occurrence of metaphor in the terminology used by computer networking professionals in the information technology (IT) industry. An analysis of linguistic ...

  7. VMEbus based computer and real-time UNIX as infrastructure of DAQ

    International Nuclear Information System (INIS)

    Yasu, Y.; Fujii, H.; Nomachi, M.; Kodama, H.; Inoue, E.; Tajima, Y.; Takeuchi, Y.; Shimizu, Y.

    1994-01-01

This paper describes what the authors have constructed as the infrastructure of a data acquisition system (DAQ). The paper reports recent developments concerning the HP VME board computer with LynxOS (HP742rt/HP-RT) and the Alpha/OSF1 with a VMEbus adapter. The paper also reports the current status of developing a Benchmark Suite for Data Acquisition (DAQBENCH) for measuring not only the performance of VME/CAMAC access but also that of context switching, inter-process communications and so on, for various computers including workstation-based systems and VME board computers.

  8. RESEARCH OF ENGINEERING TRAFFIC IN COMPUTER UZ NETWORK USING MPLS TE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    V. M. Pakhomovа

    2014-12-01

Full Text Available Purpose. Railway transport in Ukraine requires the use of computer networks of different technologies: Ethernet, Token Bus, Token Ring, FDDI and others. In the combined computer networks of railway transport it is necessary to use the packet-switching technology of multiprotocol label switching (MPLS) networks, which are based on the use of labels, more effectively. A packet network must transmit different types of traffic with a given quality of service. The purpose of this research is to develop a methodology for determining the sequence of flow assignments for the considered fragment of the UZ computer network. Methodology. In optimizing traffic management in MPLS networks, traffic engineering (TE) technology plays an important role. The main TE mechanism in MPLS is the use of unidirectional tunnels (MPLS TE tunnels) to specify the path of given traffic. A mathematical model of the traffic engineering problem in the UZ computer network with MPLS TE technology was constructed. The UZ computer network is represented as a directed graph whose vertices are the routers of the computer network and whose arcs model the links between nodes. The optimization criterion is the minimum value of the maximum utilization of the TE tunnels. Findings. Six options of flow assignment were determined; a rational sequence of flows was found, for which the maximum utilization of the TE tunnels of the considered simplified fragment of the UZ computer network does not exceed 0.5. Originality. A method of solving the traffic engineering problem in a multiprotocol UZ network with MPLS TE technology was proposed; for each class its own path is laid, depending on the bandwidth and channel loading. Practical value. The ability to determine the values of the maximum utilization coefficient of TE tunnels in UZ computer networks, based on the developed software model «TraffEng». The input parameters of the model: number of routers, channel capacity, the
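
The optimization criterion above can be sketched directly: given an assignment of flows to TE-tunnel paths, compute each link's utilization and take the maximum. The topology, capacities, and demands below are hypothetical:

```python
# Sketch of the optimization criterion described above: the maximum link
# utilization for a given assignment of flows to TE-tunnel paths. The
# topology, capacities, and demands are hypothetical.
from collections import defaultdict

capacity = {("R1", "R2"): 100, ("R2", "R3"): 100, ("R1", "R3"): 100}  # Mbit/s
flows = [  # (demand in Mbit/s, tunnel path as a node list)
    (40, ["R1", "R2", "R3"]),
    (30, ["R1", "R3"]),
]

load = defaultdict(float)
for demand, path in flows:
    for u, v in zip(path, path[1:]):   # add the demand to every link on the path
        load[(u, v)] += demand

util = {link: load[link] / capacity[link] for link in load}
print(util, "max =", max(util.values()))   # max utilization = 0.4 here
```

Comparing this maximum over different flow orderings (and path choices) is what yields the rational sequence reported in the abstract.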

  9. An Overview of Computer Network security and Research Technology

    OpenAIRE

    Rathore, Vandana

    2016-01-01

    The rapid development in the field of computer networks and systems brings both convenience and security threats for users. Security threats include network security and data security. Network security refers to the reliability, confidentiality, integrity and availability of the information in the system. The main objective of network security is to maintain the authenticity, integrity, confidentiality, availability of the network. This paper introduces the details of the technologies used in...

  10. Recurrent autoassociative networks and holistic computations

    NARCIS (Netherlands)

    Stoianov; Amari, SI; Giles, CL; Gori, M; Piuri

    2000-01-01

    The paper presents an experimental study of holistic computations over distributed representations (DRs) of sequences developed by Recurrent Autoassociative Networks (RANs). Three groups of holistic operators are studied: extracting symbols at a fixed position, extracting symbols at a variable

  11. Computational neural network regression model for Host based Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Gautam

    2016-09-01

    Full Text Available The current scenario of information gathering and storing in secure systems is a challenging task due to increasing cyber-attacks. Computational neural network techniques exist that are designed for intrusion detection systems, providing security to single machines and to the machines of an entire network. In this paper, we have used two types of computational neural network models, namely the Generalized Regression Neural Network (GRNN) model and the Multilayer Perceptron Neural Network (MPNN) model, for a host-based intrusion detection system using log files that are generated by a single personal computer. The simulation results show the correctly classified percentage of the normal and abnormal (intrusion) classes using a confusion matrix. On the basis of the results and discussion, we found that the Host based Intrusion Systems Model (HISM) significantly improved the detection accuracy while retaining a minimum false alarm rate.

  12. Unix Philosophy and the Real World: Control Software for Humanoid Robots

    Directory of Open Access Journals (Sweden)

    Neil Thomas Dantam

    2016-03-01

    Full Text Available Robot software combines the challenges of general purpose and real-time software, requiring complex logic and bounded resource use. Physical safety, particularly for dynamic systems such as humanoid robots, depends on correct software. General purpose computation has converged on unix-like operating systems -- standardized as POSIX, the Portable Operating System Interface -- for devices from cellular phones to supercomputers. The modular, multi-process design typical of POSIX applications is effective for building complex and reliable software. Absent from POSIX, however, is an interprocess communication mechanism that prioritizes newer data, as typically desired for control of physical systems. We address this need in the Ach communication library, which provides suitable semantics and performance for real-time robot control. Although initially designed for humanoid robots, Ach has broader applicability to complex mechatronic devices -- humanoid and otherwise -- that require real-time coupling of sensors, control, planning, and actuation. The initial user space implementation of Ach was limited in the ability to receive data from multiple sources. We remove this limitation by implementing Ach as a Linux kernel module, enabling Ach's high-performance and latest-message-favored semantics within conventional POSIX communication pipelines. We discuss how these POSIX interfaces and design principles apply to robot software, and we present a case study using the Ach kernel module for communication on the Baxter robot.
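
    The latest-message-favored semantics described above can be pictured with a toy model: a bounded ring of frames where writers never block and a reader asks for the newest frame rather than draining a FIFO backlog. The Python sketch below mimics only the semantics; it is not Ach's real C API, and the class and method names are invented.

```python
from collections import deque
import threading

class LatestMessageChannel:
    """Toy model of latest-message-favored IPC (in the spirit of Ach, not its API).

    A bounded ring of frames: writers never block, and a reader asking for the
    latest frame skips any backlog instead of consuming it FIFO-style.
    """

    def __init__(self, depth=8):
        self._frames = deque(maxlen=depth)  # old frames fall off automatically
        self._lock = threading.Lock()
        self._seq = 0

    def put(self, frame):
        with self._lock:
            self._seq += 1
            self._frames.append((self._seq, frame))

    def get_last(self, last_seen=0):
        """Return (seq, frame) of the newest frame, or None if nothing newer."""
        with self._lock:
            if self._frames and self._frames[-1][0] > last_seen:
                return self._frames[-1]
            return None

chan = LatestMessageChannel()
for pos in ([0.1], [0.2], [0.3]):   # a fast sensor outpacing the reader
    chan.put(pos)
print(chan.get_last())               # -> (3, [0.3]): the reader sees only the newest state
```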

  13. Computer network defense through radial wave functions

    Science.gov (United States)

    Malloy, Ian J.

    The purpose of this research is to synthesize basic and fundamental findings in quantum computing, as applied to the attack and defense of conventional computer networks. The concept focuses on uses of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and if one were to implement it as a non-trivial mitigation, it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions in that they are severely difficult for a civilian to avoid triggering given the unknown position of a landmine. Thus, the importance of understanding a logic bomb is relevant and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts in certain respects using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research focus applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime number based cryptography with trapdoor functions, and modeling radio wave propagation against an event from unknown parameters. This comes as a program relying on the artificial intelligence concept of an expert system in conjunction with trigger events for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here trapdoor denotes both the form of cipher and the implied relationship to logic bombs.

  14. Several problems of algorithmization in integrated computation programs on third generation computers for short circuit currents in complex power networks

    Energy Technology Data Exchange (ETDEWEB)

    Krylov, V.A.; Pisarenko, V.P.

    1982-01-01

    Methods of modeling complex power networks with short circuits in the networks are described. The methods are implemented in integrated computation programs for short-circuit currents and equivalents in electrical networks with a large number of branch points (up to 1000) on a computer with a limited online memory capacity (M = 4030 for the computer).

  15. Six networks on a universal neuromorphic computing substrate

    Directory of Open Access Journals (Sweden)

    Thomas ePfeil

    2013-02-01

    Full Text Available In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.

  16. Six networks on a universal neuromorphic computing substrate.

    Science.gov (United States)

    Pfeil, Thomas; Grübl, Andreas; Jeltsch, Sebastian; Müller, Eric; Müller, Paul; Petrovici, Mihai A; Schmuker, Michael; Brüderle, Daniel; Schemmel, Johannes; Meier, Karlheinz

    2013-01-01

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.

  17. Latest developments for a computer aided thermohydraulic network

    International Nuclear Information System (INIS)

    Alemberti, A.; Graziosi, G.; Mini, G.; Susco, M.

    1999-01-01

    Thermohydraulic networks are 1-D systems characterized by a small number of basic components (pumps, valves, heat exchangers, etc.) connected by pipes and bounded by a defined number of boundary conditions (tanks, atmosphere, etc.). The network system is simulated by the well-known computer program RELAP5/mod3. Information concerning the network geometry, component behaviour, and initial and boundary conditions is usually supplied to the RELAP5 code in an ASCII input file by means of 'input cards'. CATNET (Computer Aided Thermalhydraulic NETwork) is a graphical user interface that, under specific user guidelines which completely define its range of applicability, permits a very high level of standardization and simplification of the RELAP5/mod3 input deck development process as well as of the output processing. The characteristics of the components (pipes, valves, pumps, etc.) defining the network system can be entered through CATNET. The CATNET interface provides special functions to compute form losses in the most typical bending and branching configurations. When the input of all system components is ready, CATNET is able to generate the RELAP5/mod3 input file. Finally, by means of CATNET, the RELAP5/mod3 code can be run and its output results can be transformed to an intuitive display form. The paper presents an example of application of the CATNET interface as well as the latest developments, which greatly simplified the work of the users and reduced the possibility of input errors. (authors)
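
    The input-deck generation step can be pictured as a small code generator that turns component descriptions into numbered ASCII cards. The sketch below is purely schematic: the card numbers and field layout are invented placeholders, not actual RELAP5/mod3 card syntax.

```python
# Schematic sketch of a CATNET-style front end: turn component objects into
# numbered ASCII "input cards". Card numbers and fields are placeholders,
# NOT real RELAP5/mod3 syntax.

def pipe_cards(card_base, name, n_volumes, length_m, area_m2):
    """Emit a block of cards describing one pipe component."""
    cards = [f"{card_base:>7}  {name}  pipe",
             f"{card_base + 1:>7}  {n_volumes}"]
    for i in range(n_volumes):
        # one geometry card per volume: segment length and flow area
        cards.append(f"{card_base + 100 + i:>7}  {length_m / n_volumes:.3f}  {area_m2:.4f}")
    return cards

deck = ["= toy input deck generated programmatically"]
deck += pipe_cards(1100000, "hotleg", 4, 8.0, 0.05)
print("\n".join(deck))
```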

  18. Networking Micro-Processors for Effective Computer Utilization in Nursing

    OpenAIRE

    Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia

    1982-01-01

    Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes the one process of networking of complementary resources at three institutions. Prairie View A&M University, Texas A&M University and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in...

  19. ORGANIZATION OF CLOUD COMPUTING INFRASTRUCTURE BASED ON SDN NETWORK

    Directory of Open Access Journals (Sweden)

    Alexey A. Efimenko

    2013-01-01

    Full Text Available The article presents the main approaches to cloud computing infrastructure based on the SDN network in present data processing centers (DPC. The main indexes of management effectiveness of network infrastructure of DPC are determined. The examples of solutions for the creation of virtual network devices are provided.

  20. Biophysical constraints on the computational capacity of biochemical signaling networks

    Science.gov (United States)

    Wang, Ching-Hao; Mehta, Pankaj

    Biophysics fundamentally constrains the computations that cells can carry out. Here, we derive fundamental bounds on the computational capacity of biochemical signaling networks that utilize post-translational modifications (e.g. phosphorylation). To do so, we combine ideas from the statistical physics of disordered systems and the observation by Tony Pawson and others that the biochemistry underlying protein-protein interaction networks is combinatorial and modular. Our results indicate that the computational capacity of signaling networks is severely limited by the energetics of binding and the need to achieve specificity. We relate our results to one of the theoretical pillars of statistical learning theory, Cover's theorem, which places bounds on the computational capacity of perceptrons. PM and CHW were supported by a Simons Investigator in the Mathematical Modeling of Living Systems Grant, and NIH Grant No. 1R35GM119461 (both to PM).
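
    Cover's theorem, mentioned above, admits a compact worked example. The function-counting formula C(P, N) = 2 * sum_{k=0}^{N-1} binom(P-1, k) gives the number of linearly separable dichotomies of P points in general position in N dimensions; the sketch below shows the sharp capacity transition around P = 2N (where exactly half of all dichotomies are realizable).

```python
from math import comb

def cover_count(P, N):
    """Cover's function-counting theorem: number of linearly separable
    dichotomies of P points in general position in R^N."""
    return 2 * sum(comb(P - 1, k) for k in range(min(N, P)))

# Fraction of all 2^P dichotomies a perceptron can realize; note the
# sharp drop around the capacity P = 2N.
N = 20
for P in (10, 20, 40, 60, 80):
    print(P, cover_count(P, N) / 2**P)
```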

  1. Convolutional networks for fast, energy-efficient neuromorphic computing

    Science.gov (United States)

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  2. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the bare concept of plasticity of the model, so the nature of the updates is left unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as the static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  3. Markov Networks in Evolutionary Computation

    CERN Document Server

    Shakya, Siddhartha

    2012-01-01

    Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also fills an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov network based EDAs are reviewed in the book. Hot current researc...

  4. Efficient computation in adaptive artificial spiking neural networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); R.B.P. Nusselder (Roeland); H.S. Scholte; S.M. Bohte (Sander)

    2017-01-01

    Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of

  5. Computing all hybridization networks for multiple binary phylogenetic input trees.

    Science.gov (United States)

    Albrecht, Benjamin

    2015-07-30

    The computation of phylogenetic trees on the same set of species that are based on different orthologous genes can lead to incongruent trees. One possible explanation for this behavior are interspecific hybridization events recombining genes of different species. An important approach to analyze such events is the computation of hybridization networks. This work presents the first algorithm computing the hybridization number as well as a set of representative hybridization networks for multiple binary phylogenetic input trees on the same set of taxa. To improve its practical runtime, we show how this algorithm can be parallelized. Moreover, we demonstrate the efficiency of the software Hybroscale, containing an implementation of our algorithm, by comparing it to PIRNv2.0, which is so far the best available software computing the exact hybridization number for multiple binary phylogenetic trees on the same set of taxa. The algorithm is part of the software Hybroscale, which was developed specifically for the investigation of hybridization networks including their computation and visualization. Hybroscale is freely available(1) and runs on all three major operating systems. Our simulation study indicates that our approach is on average 100 times faster than PIRNv2.0. Moreover, we show how Hybroscale improves the interpretation of the reported hybridization networks by adding certain features to its graphical representation.

  6. International Symposium on Complex Computing-Networks

    CERN Document Server

    Sevgi, L; CCN2005; Complex computing networks: Brain-like and wave-oriented electrodynamic algorithms

    2006-01-01

    This book uniquely combines new advances in electromagnetic theory and circuits-and-systems theory. It integrates both fields regarding computational aspects of common interest. Emphasized subjects are those methods which mimic brain-like and electrodynamic behaviour; among these are cellular neural networks, chaos and chaotic dynamics, attractor-based computation, and stream ciphers. The book contains carefully selected contributions from the Symposium CCN2005. Pictures from the bestowal of Honorary Doctorate degrees to Leon O. Chua and Leopold B. Felsen are included.

  7. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields, from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers. Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib
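
    For a flavor of what NTP's wire format involves, here is a minimal SNTP query in Python: it sends a 48-byte mode-3 (client) packet and reads the server's transmit timestamp. The pool server name is an assumption and the snippet needs network access; real NTP clients do far more (peer filtering, clock discipline, leap handling).

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def sntp_time(server="pool.ntp.org"):
    """Minimal SNTP query: version 4, mode 3 (client); returns server Unix time."""
    packet = bytearray(48)
    packet[0] = (4 << 3) | 3          # LI=0, VN=4, Mode=3
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    secs, frac = struct.unpack("!II", data[40:48])  # Transmit Timestamp field
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

print(time.ctime(sntp_time()))
```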

  8. The Role of Computer Networks in Aerospace Engineering.

    Science.gov (United States)

    Bishop, Ann Peterson

    1994-01-01

    Presents selected results from an empirical investigation into the use of computer networks in aerospace engineering based on data from a national mail survey. The need for user-based studies of electronic networking is discussed, and a copy of the questionnaire used in the survey is appended. (Contains 46 references.) (LRW)

  9. A Distributed Computing Network for Real-Time Systems.

    Science.gov (United States)

    1980-11-03

    [Garbled OCR of the scanned report record; recoverable details: Naval Underwater Systems Center, Newport RI; technical document TD 5932, 'A Distributed Computing Network for Real-Time Systems', November 1980; author apparently Gordon E. Morson.]

  10. Synchronized Pair Configuration in Virtualization-Based Lab for Learning Computer Networks

    Science.gov (United States)

    Kongcharoen, Chaknarin; Hwang, Wu-Yuin; Ghinea, Gheorghita

    2017-01-01

    More studies are concentrating on using virtualization-based labs to facilitate computer or network learning concepts. Some benefits are lower hardware costs and greater flexibility in reconfiguring computer and network environments. However, few studies have investigated effective mechanisms for using virtualization fully for collaboration.…

  11. High-speed packet switching network to link computers

    CERN Document Server

    Gerard, F M

    1980-01-01

    Virtually all of the experiments conducted at CERN use minicomputers today; some simply acquire data and store results on magnetic tape while others actually control experiments and help to process the resulting data. Currently there are more than two hundred minicomputers being used in the laboratory. In order to provide the minicomputer users with access to facilities available on mainframes and also to provide intercommunication between various experimental minicomputers, CERN opted for a packet switching network back in 1975. It was decided to use Modcomp II computers as switching nodes. The only software to be taken was a communications-oriented operating system called Maxcom. Today eight Modcomp II 16-bit computers plus six newer Classic minicomputers from Modular Computer Services have been purchased for the CERNET data communications networks. The current configuration comprises 11 nodes connecting more than 40 user machines to one another and to the laboratory's central computing facility. (0 refs).

  12. General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 12 (2003), s. 2727-2778 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords: computational power * computational complexity * perceptrons * radial basis functions * spiking neurons * feedforward networks * recurrent networks * probabilistic computation * analog computation Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  13. A program to compute the soft Robinson-Foulds distance between phylogenetic networks.

    Science.gov (United States)

    Lu, Bingxin; Zhang, Louxin; Leong, Hon Wai

    2017-03-14

    Over the past two decades, phylogenetic networks have been studied to model reticulate evolutionary events. The relationships among phylogenetic networks, phylogenetic trees and clusters serve as the basis for reconstruction and comparison of phylogenetic networks. To understand these relationships, two problems are raised: the tree containment problem, which asks whether a phylogenetic tree is displayed in a phylogenetic network, and the cluster containment problem, which asks whether a cluster is represented at a node in a phylogenetic network. Both problems are NP-complete. A fast exponential-time algorithm for the cluster containment problem on arbitrary networks is developed and implemented in C. The resulting program is further extended into a computer program for fast computation of the Soft Robinson-Foulds distance between phylogenetic networks. Two computer programs are developed to facilitate reconstruction and validation of phylogenetic network models in evolutionary and comparative genomics. Our simulation tests indicated that they are fast enough for use in practice. Additionally, our simulation data indicate that the distribution of the Soft Robinson-Foulds distance between phylogenetic networks is unlikely to be normal.
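
    For orientation, the classical Robinson-Foulds distance on rooted trees is the halved symmetric difference of their cluster sets; the soft variant for networks discussed above generalizes this cluster comparison and is considerably harder. A minimal Python sketch for trees, with invented example trees:

```python
# Classical Robinson-Foulds distance on rooted trees via cluster sets.
# Trees are nested tuples; leaves are strings.

def clusters(tree):
    """Return the set of clusters (leaf sets below internal nodes)."""
    out = set()
    def walk(node):
        if isinstance(node, str):
            return frozenset([node])
        leaves = frozenset().union(*(walk(child) for child in node))
        out.add(leaves)
        return leaves
    walk(tree)
    return out

t1 = ((("a", "b"), "c"), ("d", "e"))
t2 = ((("a", "c"), "b"), ("d", "e"))
print(len(clusters(t1) ^ clusters(t2)) / 2)  # RF distance: 1.0 for these trees
```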

  14. Tensor network method for reversible classical computation

    Science.gov (United States)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
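
    The counting-by-contraction idea can be shown in miniature: encode each gate's truth table as a 0/1 tensor, clamp the boundary, and contract everything to a scalar. The toy circuit below is illustrative only; it does not use the paper's iterative compression-decimation scheme.

```python
import numpy as np

def gate_tensor(fn):
    """Truth table of a 2-input gate as a 0/1 tensor T[a, b, out]."""
    T = np.zeros((2, 2, 2))
    for a in (0, 1):
        for b in (0, 1):
            T[a, b, fn(a, b)] = 1.0
    return T

AND = gate_tensor(lambda a, b: a & b)
OR = gate_tensor(lambda a, b: a | b)
one = np.array([0.0, 1.0])  # clamp the circuit output to 1

# Count assignments (a, b, c) with (a AND b) OR c == 1 by full contraction:
# free indices a, b, c are summed over, so the scalar is the solution count.
count = np.einsum("abx,xcd,d->", AND, OR, one)
print(int(count))  # 5 of the 8 input assignments satisfy the constraint
```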

  15. PROFEAT Update: A Protein Features Web Server with Added Facility to Compute Network Descriptors for Studying Omics-Derived Networks.

    Science.gov (United States)

    Zhang, P; Tao, L; Zeng, X; Qin, C; Chen, S Y; Zhu, F; Yang, S Y; Li, Z R; Chen, W P; Chen, Y Z

    2017-02-03

    The studies of biological, disease, and pharmacological networks are facilitated by the systems-level investigations using computational tools. In particular, the network descriptors developed in other disciplines have found increasing applications in the study of the protein, gene regulatory, metabolic, disease, and drug-targeted networks. Facilities are provided by the public web servers for computing network descriptors, but many descriptors are not covered, including those used or useful for biological studies. We upgraded the PROFEAT web server http://bidd2.nus.edu.sg/cgi-bin/profeat2016/main.cgi for computing up to 329 network descriptors and protein-protein interaction descriptors. PROFEAT network descriptors comprehensively describe the topological and connectivity characteristics of unweighted (uniform binding constants and molecular levels), edge-weighted (varying binding constants), node-weighted (varying molecular levels), edge-node-weighted (varying binding constants and molecular levels), and directed (oriented processes) networks. The usefulness of the network descriptors is illustrated by the literature-reported studies of the biological networks derived from the genome, interactome, transcriptome, metabolome, and diseasome profiles. Copyright © 2016 Elsevier Ltd. All rights reserved.
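
    As a toy illustration of topological descriptors of the kind such servers compute (PROFEAT's actual descriptor set is far larger and includes weighted and directed variants), the following sketch computes a few basic descriptors on an invented unweighted graph.

```python
# Basic unweighted network descriptors on a toy adjacency structure.
adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}

n = len(adj)
m = sum(len(nbrs) for nbrs in adj.values()) // 2   # each edge counted twice
density = 2 * m / (n * (n - 1))

def clustering(v):
    """Local clustering coefficient: fraction of neighbor pairs that are linked."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for w in adj[u] if w in nbrs) // 2
    return 2 * links / (k * (k - 1))

print({"nodes": n, "edges": m, "density": round(density, 3),
       "avg_clustering": round(sum(map(clustering, adj)) / n, 3)})
```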

  16. Using distributed processing on a local area network to increase available computing power

    International Nuclear Information System (INIS)

    Capps, K.S.; Sherry, K.J.

    1996-01-01

    The migration from central computers to desktop computers distributed the total computing horsepower of a system over many different machines. A typical engineering office may have several networked desktop computers that are sometimes idle, especially after work hours and when people are absent. Users would benefit if applications were able to use these networked computers collectively. This paper describes a method of distributing the workload of an application on one desktop system to otherwise idle systems on the network. The authors present this discussion from a developer's viewpoint, because the developer must modify an application before the user can realize any benefit of distributed computing on available systems
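
    A minimal sketch of such a scheme using Python's stdlib multiprocessing.connection: the peer hostnames, port, authkey, and job function are all hypothetical, and a production system would add discovery of idle machines, retries, and proper security.

```python
from multiprocessing.connection import Client, Listener

PEERS = [("idle-desktop-1", 6000), ("idle-desktop-2", 6000)]  # assumed hosts

def run_distributed(work_items):
    """Master side: split the workload and farm chunks out to the peers."""
    chunks = [work_items[i::len(PEERS)] for i in range(len(PEERS))]
    results = []
    for peer, chunk in zip(PEERS, chunks):
        with Client(peer, authkey=b"shared-secret") as conn:
            conn.send(chunk)            # peer runs the job and replies
            results.extend(conn.recv())
    return results

def worker_loop(port=6000, job=lambda x: x * x):
    """Worker side: each otherwise-idle machine runs this loop."""
    with Listener(("0.0.0.0", port), authkey=b"shared-secret") as srv:
        while True:
            with srv.accept() as conn:
                conn.send([job(item) for item in conn.recv()])
```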

  17. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  18. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  19. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Directory of Open Access Journals (Sweden)

    Jakob Jordan

    2018-02-01

    Full Text Available State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
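
    The two-tier idea can be caricatured in a few lines: store per-source target lists only for sources that actually have targets on the local compute node, so memory scales with local connectivity rather than with total network size. This is an illustrative sketch, not NEST's actual data structures.

```python
class SparseTargets:
    """Two-tier local connectivity: tier 1 indexes only sources with local targets."""

    def __init__(self):
        self.tier1 = {}   # source id -> index into tier2 (present only if non-empty)
        self.tier2 = []   # lists of local target ids

    def connect(self, source, target):
        if source not in self.tier1:
            self.tier1[source] = len(self.tier2)
            self.tier2.append([])
        self.tier2[self.tier1[source]].append(target)

    def deliver(self, spike_source):
        # spikes from sources with no local targets cost one failed lookup
        idx = self.tier1.get(spike_source)
        return self.tier2[idx] if idx is not None else []

net = SparseTargets()
net.connect(7, 101)
net.connect(7, 102)
print(net.deliver(7), net.deliver(8))  # [101, 102] []
```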

  20. A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications

    OpenAIRE

    Wang, Shuo; Zhang, Xing; Zhang, Yan; Wang, Lin; Yang, Juwo; Wang, Wenbo

    2017-01-01

    With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on the backhaul links and long latency. Therefore, new architectures which bring network functions and contents to the network edge have been proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at th...

  1. A Multi-Vehicles, Wireless Testbed for Networked Control, Communications and Computing

    Science.gov (United States)

    Murray, Richard; Doyle, John; Effros, Michelle; Hickey, Jason; Low, Steven

    2002-03-01

    We have constructed a testbed consisting of 4 mobile vehicles (with 4 additional vehicles being completed), each with embedded computing and communications capability, for use in testing new approaches for command and control across dynamic networks. The system is being used, or is planned to be used, for testing a variety of communications-related technologies, including distributed command and control algorithms, dynamically reconfigurable network topologies, source coding for real-time transmission of data in lossy environments, and multi-network communications. A unique feature of the testbed is the use of vehicles that have second-order dynamics, requiring real-time feedback algorithms to stabilize the system while performing cooperative tasks. The testbed was constructed in the Caltech Vehicles Laboratory and consists of individual vehicles with PC-based computation and controls and multiple communications devices (802.11 wireless Ethernet, Bluetooth, and infrared). The vehicles are freely moving, wheeled platforms propelled by high-performance ducted fans. The room contains access points for an 802.11 network, overhead visual sensing (to allow emulation of GPS signal processing), a centralized computer for emulating certain distributed computations, and network gateways to control and manipulate communications traffic.

  2. In the land of the dinosaurs, how to survive experience with building of midrange computing cluster

    Energy Technology Data Exchange (ETDEWEB)

    Chevel, A.E. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation)]; Lauret, J. [SUNY at Stony Brook (United States)]

    2001-07-01

    The authors discuss how to put into operation a midrange computing cluster for the Nuclear Chemistry Group (NCG) of the State University of New York at Stony Brook (SUNY-SB). The NCG is one of the collaborators within the RHIC/Phenix experiment located at Brookhaven National Laboratory (BNL). The Phenix detector system produces about half a PB (or 500 TB) of data a year, and our goal was to provide this remote collaborating facility with the means to be part of the analysis process. The computing installation was put into operation at the beginning of the year 2000. The cluster consists of 32 peripheral machines running under Linux and a central Alpha 4100 server under Digital Unix 4.0f (now Tru64 UNIX). The realization process is under discussion.

  3. In the land of the dinosaurs, how to survive experience with building of midrange computing cluster

    International Nuclear Information System (INIS)

    Chevel, A.E.; Lauret, J.

    2001-01-01

    The authors discuss how to put into operation a midrange computing cluster for the Nuclear Chemistry Group (NCG) of the State University of New York at Stony Brook (SUNY-SB). The NCG is one of the collaborators within the RHIC/Phenix experiment located at Brookhaven National Laboratory (BNL). The Phenix detector system produces about half a PB (or 500 TB) of data a year, and our goal was to provide this remote collaborating facility with the means to be part of the analysis process. The computing installation was put into operation at the beginning of the year 2000. The cluster consists of 32 peripheral machines running under Linux and a central Alpha 4100 server under Digital Unix 4.0f (now Tru64 UNIX). The realization process is under discussion

  4. Performance of the TRISTAN computer control network

    International Nuclear Information System (INIS)

    Koiso, H.; Abe, K.; Akiyama, A.; Katoh, T.; Kikutani, E.; Kurihara, N.; Kurokawa, S.; Oide, K.; Shinomoto, M.

    1985-01-01

    An N-to-N token ring network of twenty-four minicomputers controls the TRISTAN accelerator complex. The computers are linked by optical fiber cables with a 10 Mbps transmission speed. The software system is based on NODAL, a multi-computer interpreter language developed at the CERN SPS. Typical messages exchanged between computers are NODAL programs and NODAL variables transmitted by the EXEC and REMIT commands. These messages are exchanged as a cluster of packets whose maximum size is 512 bytes. At present, eleven minicomputers are connected to the network and the total length of the ring is 1.5 km. In this condition, the maximum attainable throughput is 980 kbytes/s. The response time of a pair of EXEC and REMIT transactions, which transmit a NODAL array A together with the one-line program 'REMIT A' and immediately remit A, is measured to be 95 + 0.039χ ms, where χ is the array size in bytes (about 115 ms for a 512-byte array). In ordinary accelerator operations, the maximum channel utilization is 2%, the average packet length is 96 bytes, and the transmission rate is 10 kbytes/s

  5. Integrating Network Awareness in ATLAS Distributed Computing Using the ANSE Project

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Petrosyan, Artem; Batista, Jorge Horacio; Mc Kee, Shawn Patrick

    2015-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available such as software defined networking hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high level applications such as experiment workload management and data management systems. End to end monitoring of networks using perfSONAR combined with data flow performance metrics further allows applications to adapt based on real time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management, building upon ...

  6. Simulation and Noise Analysis of Multimedia Transmission in Optical CDMA Computer Networks

    Directory of Open Access Journals (Sweden)

    Nasaruddin Nasaruddin

    2013-09-01

    Full Text Available This paper simulates and analyzes the noise of multimedia transmission in a flexible optical code division multiple access (OCDMA) computer network with different quality of service (QoS) requirements. To achieve multimedia transmission in OCDMA, we have proposed strict variable-weight optical orthogonal codes (VW-OOCs), which can guarantee the smallest correlation value of one by optimal design. In developing multimedia transmission for a computer network, a simulation tool is essential for analyzing the effectiveness of various transmissions of services. In this paper, implementation models are proposed to analyze multimedia transmission in representative OCDMA computer networks by using MATLAB Simulink tools. Simulation results of the models are discussed, including output spectrums of transmitted signals, superimposed signals, received signals, and eye diagrams with and without noise. Using the proposed models, a multimedia OCDMA computer network using the strict VW-OOC is practically evaluated. Furthermore, system performance is also evaluated by considering avalanche photodiode (APD) noise and thermal noise. The results show that the system performance depends on code weight, received laser power, APD noise, and thermal noise, which should be considered as important parameters to design and implement multimedia transmission in OCDMA computer networks.

  7. Simulation and Noise Analysis of Multimedia Transmission in Optical CDMA Computer Networks

    Directory of Open Access Journals (Sweden)

    Nasaruddin

    2009-11-01

    Full Text Available This paper simulates and analyzes the noise of multimedia transmission in a flexible optical code division multiple access (OCDMA) computer network with different quality of service (QoS) requirements. To achieve multimedia transmission in OCDMA, we have proposed strict variable-weight optical orthogonal codes (VW-OOCs), which can guarantee the smallest correlation value of one by optimal design. In developing multimedia transmission for a computer network, a simulation tool is essential for analyzing the effectiveness of various transmissions of services. In this paper, implementation models are proposed to analyze multimedia transmission in representative OCDMA computer networks by using MATLAB Simulink tools. Simulation results of the models are discussed, including output spectrums of transmitted signals, superimposed signals, received signals, and eye diagrams with and without noise. Using the proposed models, a multimedia OCDMA computer network using the strict VW-OOC is practically evaluated. Furthermore, system performance is also evaluated by considering avalanche photodiode (APD) noise and thermal noise. The results show that the system performance depends on code weight, received laser power, APD noise, and thermal noise, which should be considered as important parameters to design and implement multimedia transmission in OCDMA computer networks.
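
    The correlation property that optical orthogonal code designs guarantee can be checked numerically: for 0/1 codewords, every cyclic cross-correlation and off-peak autocorrelation should stay at or below one. The codewords below form a standard (13, 3, 1)-OOC pair chosen for illustration; they are not the paper's VW-OOC construction.

```python
def cyclic_correlation(x, y):
    """All cyclic correlation values between 0/1 codewords x and y."""
    n = len(x)
    return [sum(x[i] * y[(i + s) % n] for i in range(n)) for s in range(n)]

a = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # ones at {0, 1, 4}, weight 3
b = [1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # ones at {0, 2, 7}, weight 3

print(max(cyclic_correlation(a, b)))          # cross-correlation peak -> 1
print(max(cyclic_correlation(a, a)[1:]))      # off-peak autocorrelation -> 1
```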

  8. Network architecture test-beds as platforms for ubiquitous computing.

    Science.gov (United States)

    Roscoe, Timothy

    2008-10-28

    Distributed systems research, and in particular ubiquitous computing, has traditionally assumed the Internet as a basic underlying communications substrate. Recently, however, the networking research community has come to question the fundamental design or 'architecture' of the Internet. This has been led by two observations: first, that the Internet as it stands is now almost impossible to evolve to support new functionality; and second, that modern applications of all kinds now use the Internet rather differently, and frequently implement their own 'overlay' networks above it to work around its perceived deficiencies. In this paper, I discuss recent academic projects to allow disruptive change to the Internet architecture, and also outline a radically different view of networking for ubiquitous computing that such proposals might facilitate.

  9. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    Science.gov (United States)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. The purpose is to establish and discuss Laboratory objectives for computing and networking in support of science. The purpose is also to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  10. Novel Ethernet Based Optical Local Area Networks for Computer Interconnection

    NARCIS (Netherlands)

    Radovanovic, Igor; van Etten, Wim; Taniman, R.O.; Kleinkiskamp, Ronny

    2003-01-01

    In this paper we present new optical local area networks for fiber-to-the-desk application. Presented networks are expected to bring a solution for having optical fibers all the way to computers. To bring the overall implementation costs down we have based our networks on short-wavelength optical

  11. A UNIX-based prototype biomedical virtual image processor

    International Nuclear Information System (INIS)

    Fahy, J.B.; Kim, Y.

    1987-01-01

    The authors have developed a multiprocess virtual image processor for the IBM PC/AT, in order to maximize image processing software portability for biomedical applications. An interprocess communication scheme, based on two-way metacode exchange, has been developed and verified for this purpose. Application programs call a device-independent image processing library, which transfers commands over a shared data bridge to one or more Autonomous Virtual Image Processors (AVIP). Each AVIP runs as a separate process in the UNIX operating system, and implements the device-independent functions on the image processor to which it corresponds. Application programs can control multiple image processors at a time, change the image processor configuration used at any time, and are completely portable among image processors for which an AVIP has been implemented. Run-time speeds have been found to be acceptable for higher level functions, although rather slow for lower level functions, owing to the overhead associated with sending commands and data over the shared data bridge

  12. Machine learning based Intelligent cognitive network using fog computing

    Science.gov (United States)

    Lu, Jingyang; Li, Lun; Chen, Genshe; Shen, Dan; Pham, Khanh; Blasch, Erik

    2017-05-01

    In this paper, a Cognitive Radio Network (CRN) based on artificial intelligence is proposed to distribute the limited radio spectrum resources more efficiently. The CRN framework can analyze the time-sensitive signal data close to the signal source using fog computing with different types of machine learning techniques. Depending on the computational capabilities of the fog nodes, different features and machine learning techniques are chosen to optimize spectrum allocation. Also, the computing nodes send periodic signal summaries, which are much smaller than the original signal, to the cloud so that the overall system's spectrum resource allocation strategies are dynamically updated. Applying fog computing, the system is more adaptive to the local environment and robust to spectrum changes. As most of the signal data is processed at the fog level, it further strengthens the system security by reducing the communication burden on the communications network.

  13. AN EVALUATION AND IMPLEMENTATION OF COLLABORATIVE AND SOCIAL NETWORKING TECHNOLOGIES FOR COMPUTER EDUCATION

    Directory of Open Access Journals (Sweden)

    Ronnie Cheung

    2011-06-01

    Full Text Available We have developed a collaborative and social networking environment that integrates the knowledge and skills in communication and computing studies with a multimedia development project. The outcomes of the students’ projects show that computer literacy can be enhanced through a cluster of communication, social, and digital skills. Experience in implementing a web-based social networking environment shows that the new media is an effective means of enriching knowledge by sharing in computer literacy projects. The completed assignments, projects, and self-reflection reports demonstrate that the students were able to achieve the learning outcomes of a computer literacy course in multimedia development. The students were able to assess the effectiveness of a variety of media through the development of media presentations in a web-based, social-networking environment. In the collaborative and social-networking environment, students were able to collaborate and communicate with their team members to solve problems, resolve conflicts, make decisions, and work as a team to complete tasks. Our experience has shown that social networking environments are effective for computer literacy education, and the development of the new media is emerging as the core knowledge for computer literacy education.

  14. Student Motivation in Computer Networking Courses

    OpenAIRE

    Wen-Jung Hsin, PhD

    2007-01-01

    This paper introduces several hands-on projects that have been used to motivate students in learning various computer networking concepts. These projects are shown to be very useful and applicable to the learners’ daily tasks and activities such as emailing, Web browsing, and online shopping and banking, and lead to an unexpected byproduct, self-motivation.

  15. Quantum computation over the butterfly network

    International Nuclear Information System (INIS)

    Soeda, Akihito; Kinjo, Yoshiyuki; Turner, Peter S.; Murao, Mio

    2011-01-01

    In order to investigate distributed quantum computation under restricted network resources, we introduce a quantum computation task over the butterfly network where both quantum and classical communications are limited. We consider deterministically performing a two-qubit global unitary operation on two unknown inputs given at different nodes, with outputs at two distinct nodes. By using a particular resource setting introduced by M. Hayashi [Phys. Rev. A 76, 040301(R) (2007)], which is capable of performing a swap operation by adding two maximally entangled qubits (ebits) between the two input nodes, we show that unitary operations can be performed without adding any entanglement resource, if and only if the unitary operations are locally unitary equivalent to controlled unitary operations. Our protocol is optimal in the sense that the unitary operations cannot be implemented if we relax the specifications of any of the channels. We also construct protocols for performing controlled traceless unitary operations with a 1-ebit resource and for performing global Clifford operations with a 2-ebit resource.

  16. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    OpenAIRE

    Shuo Gu; Jianfeng Pei

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regula...

  17. Computer Networking Strategies for Building Collaboration among Science Educators.

    Science.gov (United States)

    Aust, Ronald

    The development and dissemination of science materials can be associated with technical delivery systems such as the Unified Network for Informatics in Teacher Education (UNITE). The UNITE project was designed to investigate ways for using computer networking to improve communications and collaboration among university schools of education and…

  18. An Efficient Algorithm for Computing Attractors of Synchronous And Asynchronous Boolean Networks

    Science.gov (United States)

    Zheng, Desheng; Yang, Guowu; Li, Xiaoyu; Wang, Zhicai; Liu, Feng; He, Lei

    2013-01-01

    Biological networks, such as genetic regulatory networks, often contain positive and negative feedback loops that settle down to dynamically stable patterns. Identifying these patterns, the so-called attractors, can provide important insights for biologists into the molecular mechanisms underlying many coordinated cellular processes such as cellular division, differentiation, and homeostasis. Both synchronous and asynchronous Boolean networks have been used to simulate genetic regulatory networks and identify their attractors. Common methods of computing attractors start with a randomly selected initial state and finish with an exhaustive search of the state space of a network. However, the time complexity of these methods grows exponentially with respect to the number and length of attractors. Here, we build two algorithms to compute attractors in synchronous and asynchronous Boolean networks. For the synchronous scenario, combining iterative methods and reduced ordered binary decision diagrams (ROBDDs), we propose an improved algorithm to compute attractors. In the second algorithm, the attractors of synchronous Boolean networks are used in asynchronous Boolean translation functions to derive the attractors of the asynchronous scenario. The proposed algorithms are implemented in a procedure called geneFAtt. Compared to existing tools such as genYsis, geneFAtt is significantly faster in computing attractors for empirical experimental systems. Availability: The software package is available at https://sites.google.com/site/desheng619/download. PMID:23585840
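
    For contrast with the symbolic ROBDD-based approach, a brute-force baseline for the synchronous case fits in a few lines: enumerate every state, follow the trajectory until it revisits a state, and record the cycle it falls into. This is exponential in the number of genes, which is exactly what tools like geneFAtt avoid; the three-gene network is invented for illustration.

```python
def attractors(update, n):
    """Exhaustive attractor search for a synchronous Boolean network.

    update maps a state tuple to the next state tuple; n is the gene count.
    """
    found = set()
    for start in range(2 ** n):
        state = tuple((start >> i) & 1 for i in range(n))
        seen = {}
        while state not in seen:
            seen[state] = len(seen)        # record visit order
            state = update(state)
        cycle_start = seen[state]          # trajectory re-entered here
        cycle = tuple(sorted(s for s, t in seen.items() if t >= cycle_start))
        found.add(cycle)                   # canonical (sorted) cycle
    return found

# Toy 3-gene network: x0' = x1, x1' = x0 AND x2, x2' = NOT x0
step = lambda s: (s[1], s[0] & s[2], 1 - s[0])
for cyc in attractors(step, 3):
    print(cyc)
```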

  19. Computer network for experimental research using ISDN

    International Nuclear Information System (INIS)

    Ida, Katsumi; Nakanishi, Hideya

    1997-01-01

    This report describes the development of a computer network that uses the Integrated Services Digital Network (ISDN) for real-time analysis in experimental plasma physics and nuclear fusion research. The communication speed, 64/128 kbps (INS64) or 1.5 Mbps (INS1500) per connection, is independent of how busy the network is. When INS1500 is used, the communication speed, which is proportional to the public telephone connection fee, can be dynamically varied from 64 kbps to 1472 kbps, depending on how much data are being transferred, using the Bandwidth-on-Demand (BOD) function in the ISDN router. On-demand dial-up and time-out disconnection reduce the public telephone connection fee by 10%-97%. (author)

  20. Development of the computer system for the JT-60 negative-ion based NBI

    International Nuclear Information System (INIS)

    Kawai, Mikito; Oohara, Hiroshi; Honda, Atsushi; Kuriyama, Masaaki; Aoyagi, Tetsuo.

    1997-03-01

    The negative-ion based NBI system (N-NBI) for JT-60 is the first NBI system in the world to use a negative-ion source. The N-NBI is designed to deliver a neutral beam injection power of 10 MW at 500 keV. The computer system for the N-NBI is composed of UNIX workstations and VMEbus systems, and has the functions of ion source operation and data acquisition and processing. Since a real-time operating system compatible with UNIX was adopted for the VMEbus systems, the software development environment for both the workstations and the VMEbus systems is unified under UNIX. The software has been developed giving priority to the software required for the verification tests, which are performed in accordance with the progress of the N-NBI construction. The first beam injection with the N-NBI was conducted in March using the newly developed software, and deuterium neutral beam injection of 350 keV, 2.5 MW had been achieved as of the end of October 1996. (author)

  1. New control system for the KEK-linac

    International Nuclear Information System (INIS)

    Kamikubota, N.; Furukawa, K.; Nakahara, K.; Abe, I.; Akimoto, H.

    1993-01-01

    A new control system for the KEK Linac has been developed. Unix-based workstations and VME-bus computers have been introduced, interconnected by an Ethernet that serves as a high-speed data-exchange network. The new system will start operation after October 1993. (author)

  2. Student Motivation in Computer Networking Courses

    Directory of Open Access Journals (Sweden)

    Wen-Jung Hsin

    2007-01-01

    Full Text Available This paper introduces several hands-on projects that have been used to motivate students in learning various computer networking concepts. These projects are shown to be very useful and applicable to the learners’ daily tasks and activities such as emailing, Web browsing, and online shopping and banking, and lead to an unexpected byproduct, self-motivation.

  3. 1st International Conference on Signal, Networks, Computing, and Systems

    CERN Document Server

    Mohapatra, Durga; Nagar, Atulya; Sahoo, Manmath

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the first International Conference on Signal, Networks, Computing, and Systems (ICSNCS 2016) held at Jawaharlal Nehru University, New Delhi, India during February 25-27, 2016. The book is organized into two volumes and primarily focuses on theory and applications in the broad areas of communication technology, computer science and information security. The book aims to bring together the latest scientific research works of academic scientists, professors, research scholars and students in the areas of signal, networks, computing and systems detailing the practical challenges encountered and the solutions adopted.

  4. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  5. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use features of images, such as edges and profiles, as the input data form; other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.

  6. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    Science.gov (United States)

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
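    A sketch of the partitioning idea under simplifying assumptions: the training set is split into near-equal chunks, each worker computes a partial gradient, and the partial gradients are averaged. The paper distributes this over a parallel virtual machine across heterogeneous machines; Python's multiprocessing on one host, with a linear model standing in for the neural network, keeps the sketch short.

      import numpy as np
      from multiprocessing import Pool

      def partial_grad(args):
          # Mean-squared-error gradient of a linear model on one partition.
          w, X, y = args
          return X.T @ (X @ w - y) / len(y)

      def distributed_gradient(w, X, y, workers=4):
          chunks = list(zip(np.array_split(X, workers), np.array_split(y, workers)))
          with Pool(workers) as pool:
              grads = pool.map(partial_grad, [(w, Xc, yc) for Xc, yc in chunks])
          return sum(grads) / workers      # average of per-partition gradients

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          X, w_true = rng.normal(size=(1000, 5)), np.arange(5.0)
          y = X @ w_true
          w = np.zeros(5)
          for _ in range(200):             # plain gradient descent
              w -= 0.1 * distributed_gradient(w, X, y)
          print(np.round(w, 2))            # approaches w_true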

  7. The challenge of networked enterprises for cloud computing interoperability

    OpenAIRE

    Mezgár, István; Rauschecker, Ursula

    2014-01-01

    Manufacturing enterprises have to organize themselves into effective system architectures forming different types of Networked Enterprises (NE) to match fast-changing market demands. Cloud Computing (CC) is an important up-to-date computing concept for NE, as it offers significant financial and technical advantages besides high-level collaboration possibilities. As cloud computing is a new concept, the solutions for handling interoperability, portability, security, privacy and standardization c...

  8. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    Science.gov (United States)

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, the mechanism of cloud computing is a promising solution, most popularly the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed: they show that our parallel approach can successfully infer networks with the desired behaviors and that the computation time can be greatly reduced. Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel population-based optimization method with the parallel computational framework, high
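    A hedged sketch of the parallel evaluation step: candidate parameter vectors are scored concurrently (the "map" phase) and selection acts as the "reduce". The toy objective and the plain mutation-selection loop stand in for the paper's hybrid GA-PSO method and the Hadoop framework.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      TARGET = np.array([0.5, -1.2, 2.0])      # hypothetical network parameters

      def fitness(candidate):
          # Toy objective: closeness of simulated to observed gene profiles.
          return -np.sum((candidate - TARGET) ** 2)

      def evolve(pop_size=64, gens=50, dim=3, seed=1):
          rng = np.random.default_rng(seed)
          pop = rng.normal(size=(pop_size, dim))
          with ProcessPoolExecutor() as ex:
              for _ in range(gens):
                  scores = list(ex.map(fitness, pop))               # map phase
                  elite = pop[np.argsort(scores)[-pop_size // 4:]]  # reduce
                  pop = np.repeat(elite, 4, axis=0) \
                        + rng.normal(scale=0.1, size=(pop_size, dim))
          return max(pop, key=fitness)

      if __name__ == "__main__":
          print(np.round(evolve(), 2))         # approaches TARGET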

  9. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    Science.gov (United States)

    1991-06-01

    Interim report on distributed problem solving using adaptive networks with a computer intermediary resource and intelligent executive computer communication, by John Lyman and Carla J. Conaway, University of California at Los Angeles. Related material appeared in Proceedings of the National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh.

  10. Locating hardware faults in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and the root of a parent test tree; identifying, for each child compute node of the parent node, a child test tree having that child compute node as root; running the same test suite on the parent test tree and on each child test tree; and identifying the parent compute node as having a defective link to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
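    A small Python sketch of the localization logic, with a simulated test suite: a parent is flagged as having a defective link when its own test tree fails while every child test tree passes. The tree, the injected fault, and all names are illustrative, not taken from the patent.

      TREE = {0: [1, 2], 1: [3, 4], 2: [5, 6]}   # parent -> children
      FAULTY_LINKS = {(1, 4)}                    # injected fault for the demo

      def links(root):
          for child in TREE.get(root, []):
              yield (root, child)
              yield from links(child)

      def run_suite(root):
          # Stand-in for the collective test suite on the subtree at root.
          return not any(l in FAULTY_LINKS for l in links(root))

      def locate(root, found=None):
          found = [] if found is None else found
          children = TREE.get(root, [])
          if not run_suite(root) and all(run_suite(c) for c in children):
              found.append(root)     # fault isolated to one of root's own links
          for c in children:
              locate(c, found)
          return found

      print(locate(0))               # -> [1]: node 1 has a defective child link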

  11. A complex network approach to cloud computing

    International Nuclear Information System (INIS)

    Travieso, Gonzalo; Ruggiero, Carlos Antônio; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2016-01-01

    Cloud computing has become an important means to speed up computing. One problem that heavily influences the performance of such systems is the choice of nodes as servers responsible for executing the clients' tasks. In this article we report how complex networks can be used to model this problem. More specifically, we investigate the processing performance of cloud systems underlaid by Erdős–Rényi (ER) and Barabási-Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of the cost of communication between the client and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter, the ER topology provides better performance than the BA for smaller average degrees, and the opposite behaviour for larger average degrees. With respect to cost, smaller values are found in the BA topology irrespective of the average degree. In addition, we verified that it is easier to find good servers in ER than in BA networks. Surprisingly, balance and cost are not much affected by the presence of communities. However, for a well-defined community network, we found that it is important to assign each server to a different community so as to achieve better performance. (paper)
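    An illustrative sketch of the comparison, assuming networkx and arbitrary parameters rather than those of the paper: two servers are placed in an ER and a BA graph, and the mean client-to-nearest-server distance (the cost) and the share of clients assigned to the first server (the balance) are measured.

      import networkx as nx

      def cost_and_balance(g, servers=(0, 1)):
          dist = {s: nx.single_source_shortest_path_length(g, s) for s in servers}
          clients = [v for v in g if v not in servers]
          nearest = [min(servers, key=lambda s: dist[s].get(v, 10**9))
                     for v in clients]
          cost = sum(min(dist[s].get(v, 10**9) for s in servers)
                     for v in clients) / len(clients)
          return cost, nearest.count(servers[0]) / len(clients)

      n, k = 500, 4                                  # nodes, mean degree
      er = nx.gnm_random_graph(n, n * k // 2, seed=1)
      ba = nx.barabasi_albert_graph(n, k // 2, seed=1)
      for name, g in (("ER", er), ("BA", ba)):
          print(name, cost_and_balance(g))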

  12. Educational Technology Network: a computer conferencing system dedicated to applications of computers in radiology practice, research, and education.

    Science.gov (United States)

    D'Alessandro, M P; Ackerman, M J; Sparks, S M

    1993-11-01

    Educational Technology Network (ET Net) is a free, easy to use, on-line computer conferencing system organized and funded by the National Library of Medicine that is accessible via the SprintNet (SprintNet, Reston, VA) and Internet (Merit, Ann Arbor, MI) computer networks. It is dedicated to helping bring together, in a single continuously running electronic forum, developers and users of computer applications in the health sciences, including radiology. ET Net uses the Caucus computer conferencing software (Camber-Roth, Troy, NY) running on a microcomputer. This microcomputer is located in the National Library of Medicine's Lister Hill National Center for Biomedical Communications and is directly connected to the SprintNet and the Internet networks. The advanced computer conferencing software of ET Net allows individuals who are separated in space and time to unite electronically to participate, at any time, in interactive discussions on applications of computers in radiology. A computer conferencing system such as ET Net allows radiologists to maintain contact with colleagues on a regular basis when they are not physically together. Topics of discussion on ET Net encompass all applications of computers in radiological practice, research, and education. ET Net has been in successful operation for 3 years and has a promising future aiding radiologists in the exchange of information pertaining to applications of computers in radiology.

  13. Classification and Analysis of Computer Network Traffic

    DEFF Research Database (Denmark)

    Bujlow, Tomasz

    2014-01-01

    various classification modes (decision trees, rulesets, boosting, softening thresholds) regarding the classification accuracy and the time required to create the classifier. We showed how to use our VBS tool to obtain per-flow, per-application, and per-content statistics of traffic in computer networks...

  14. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  15. Contribution to data acquisition software of the Eurogam and Diamant multidetectors in a distributed Unix/VxWorks environment; Contribution aux logiciels d`acquisition de donnees des multidetecteurs Eurogam et Diamant dans un environnement reparti Unix/VXWorks

    Energy Technology Data Exchange (ETDEWEB)

    Diarra, C

    1994-06-01

    Questions about nuclear matter call for new, high-performance equipment. Eurogam, a 4-pi gamma-ray multidetector, is a precious tool for gamma spectroscopy, but a charged-particle detector is also needed, and for this purpose Diamant is Eurogam's partner. These two multidetectors required dedicated data acquisition software. The whole of acquisition control and management is based on Sun stations running Unix. 56 figs., 64 refs.

  16. Network Design of the Spaceport Command and Control System

    Science.gov (United States)

    Teijeiro, Antonio

    2017-01-01

    I helped the Launch Control System (LCS) hardware team sustain the network design of the Spaceport Command and Control System. I wrote the procedure that will be used to satisfy an official hardware test for the hardware carrying data from the Launch Vehicle. I installed hardware and updated design documents in support of the ongoing development of the Spaceport Command and Control System and applied firewall experience I gained during my spring 2017 semester to inspect and create firewall security policies as requested. Finally, I completed several online courses concerning networking fundamentals and Unix operating systems.

  17. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  18. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.

  19. High Energy Physics Computer Networking: Report of the HEPNET Review Committee

    International Nuclear Information System (INIS)

    1988-06-01

    This paper discusses the computer networks available to high energy physics facilities for the transmission of data. Topics covered include existing and planned networks and HEPNET requirements.

  20. A worldwide flock of Condors : load sharing among workstation clusters

    NARCIS (Netherlands)

    Epema, D.H.J.; Livny, M.; Dantzig, van R.; Evers, X.; Pruyne, J.

    1996-01-01

    Condor is a distributed batch system for sharing the workload of compute-intensive jobs in a pool of unix workstations connected by a network. In such a Condor pool, idle machines are spotted by Condor and allocated to queued jobs, thus putting otherwise unutilized capacity to efficient use. When

  1. State of the Art of Network Security Perspectives in Cloud Computing

    Science.gov (United States)

    Oh, Tae Hwan; Lim, Shinyoung; Choi, Young B.; Park, Kwang-Roh; Lee, Heejo; Choi, Hyunsang

    Cloud computing is now regarded as a social phenomenon that satisfies customers' needs. Arguably, customers' needs together with the primary principle of economy, gaining maximum benefit from minimum investment, are what cloud computing realizes. We live in a connected society flooded with information; without computers connected to the Internet, our activities and the work of daily living would be impossible. Cloud computing can provide customers with custom-tailored application software and user environments based on their needs, by adopting on-demand outsourcing of computing resources through the Internet. It also provides users with high-end computing power and expensive application software packages, with users accessing their data and applications hosted on remote systems. As cloud computing systems are connected to the Internet, their network security issues must be considered before real-world service. In this paper, a survey of and issues in the network security of cloud computing are discussed from the perspective of real-world service environments.

  2. A contribution to the test software for the VXI electronic cards of the Eurogam multidetector in a Unix/VXWorks distributed environment

    International Nuclear Information System (INIS)

    Kadionik, P.

    1992-01-01

    The Eurogam gamma-ray multidetector involves, in a first phase, 45 hyper-pure Ge detectors, each surrounded by an anti-Compton shield of 10 BGO detectors. In order to ensure the highest reliability and easy upgrades of the array, the electronic cards have been designed in the new VXI (VMEbus Extensions for Instrumentation) standard; this allows the 495 detectors, with 4300 parameters to be adjusted by software, to be driven. The data acquisition architecture is distributed over an Ethernet network. The set-up and test software for the VXI cards has been written in C; it uses a real-time kernel (VxWorks from Wind River Systems) interfaced to the Sun Unix environment. Inter-task communications use the Remote Procedure Call (RPC) protocol. The inner shell of the software is connected to a database and to a graphical interface that gives engineers and physicists a very easy set-up for the many parameters to adjust.
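    An analogy sketch of the set-up pattern: a workstation client adjusts a front-end parameter through a remote procedure call. The real system is written in C with RPC between Sun Unix and VxWorks targets, so Python's xmlrpc module and the parameter name here are stand-ins only.

      import threading
      import xmlrpc.client
      from xmlrpc.server import SimpleXMLRPCServer

      params = {"ge07.cfd_threshold": 120}   # hypothetical VXI card setting

      def set_param(name, value):
          params[name] = value               # would program the card here
          return params[name]

      server = SimpleXMLRPCServer(("localhost", 8900), logRequests=False)
      server.register_function(set_param)
      threading.Thread(target=server.serve_forever, daemon=True).start()

      proxy = xmlrpc.client.ServerProxy("http://localhost:8900")
      print(proxy.set_param("ge07.cfd_threshold", 135))   # -> 135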

  3. Approaches in highly parameterized inversion-PESTCommander, a graphical user interface for file and run management across networks

    Science.gov (United States)

    Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.

    2012-01-01

    Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs; however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing. Version 1.0 of PESTCommander was developed primarily to work with the

  4. Atomic switch networks-nanoarchitectonic design of a complex system for natural computing.

    Science.gov (United States)

    Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Aono, M; Stieg, A Z; Gimzewski, J K

    2015-05-22

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing-a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.

  5. Test experience on an ultrareliable computer communication network

    Science.gov (United States)

    Abbott, L. W.

    1984-01-01

    The dispersed sensor processing mesh (DSPM) is an experimental, ultra-reliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of this paper is the system performance of the growth algorithm contained in the grow routine. The characteristics imbued to DSPM by the growth algorithm are also discussed. Data from an experimental DSPM network and software simulation of larger DSPM-type networks are used to examine the inherent limitation on growth time by the growth algorithm and the relationship of growth time to network size and topology.

  6. Cloud Computing Services for Seismic Networks

    Science.gov (United States)

    Olson, Michael

    This thesis describes a compositional framework for developing situation awareness applications: applications that provide ongoing information about a user's changing environment. The thesis describes how the framework is used to develop a situation awareness application for earthquakes. The applications are implemented as Cloud computing services connected to sensors and actuators. The architecture and design of the Cloud services are described and measurements of performance metrics are provided. The thesis includes results of experiments on earthquake monitoring conducted over a year. The applications developed by the framework are (1) the CSN (the Community Seismic Network), which uses relatively low-cost sensors deployed by members of the community, and (2) SAF (the Situation Awareness Framework), which integrates data from multiple sources, including the CSN; the CISN (the California Integrated Seismic Network), a network consisting of high-quality seismometers deployed carefully by professionals in the CISN organization and spread across Southern California; and prototypes of multi-sensor platforms that include carbon monoxide, methane, dust and radiation sensors.

  7. Analysis of stationary availability factor of two-level backbone computer networks with arbitrary topology

    Science.gov (United States)

    Rahman, P. A.

    2018-05-01

    This scientific paper deals with two-level backbone computer networks of arbitrary topology. A specialized method offered by the author for calculating the stationary availability factor of such networks, based on Markov reliability models for a set of independent repairable elements with given failure and repair rates together with methods of discrete mathematics, is discussed. A specialized algorithm offered by the author for analyzing network connectivity, taking into account different kinds of network equipment failures, is also described. Finally, the paper presents an example calculation of the stationary availability factor for a backbone computer network with a given topology.
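    A worked sketch of the quantity in question, under the assumption that each repairable element follows a two-state Markov model with stationary availability mu/(lambda+mu) and that the network is up when two terminals remain connected. The three-edge topology and the rates are invented; a real backbone would call for the paper's connectivity analysis rather than brute-force enumeration.

      from itertools import product
      import networkx as nx

      EDGES = {("a", "b"): (0.001, 0.10),   # edge -> (failure rate, repair rate)
               ("b", "c"): (0.002, 0.10),
               ("a", "c"): (0.001, 0.05)}

      def edge_avail(lam, mu):
          return mu / (lam + mu)            # steady state of a 2-state Markov model

      def network_availability(src="a", dst="c"):
          total = 0.0
          for states in product((0, 1), repeat=len(EDGES)):
              p, g = 1.0, nx.Graph()
              g.add_nodes_from("abc")
              for (edge, (lam, mu)), up in zip(EDGES.items(), states):
                  a = edge_avail(lam, mu)
                  p *= a if up else 1 - a
                  if up:
                      g.add_edge(*edge)
              if nx.has_path(g, src, dst):
                  total += p                # accumulate P(state) over up-states
          return total

      print(round(network_availability(), 6))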

  8. A new fault detection method for computer networks

    International Nuclear Information System (INIS)

    Lu, Lu; Xu, Zhengguo; Wang, Wenhai; Sun, Youxian

    2013-01-01

    Over the past few years, fault detection for computer networks has attracted extensive attention owing to its importance in network management. Most existing fault detection methods are based on active probing techniques, which can detect the occurrence of faults quickly and precisely, but these methods suffer from the limitation of traffic overhead, especially in large-scale networks. To relieve the traffic overhead induced by active-probing-based methods, a new fault detection method is proposed in this paper whose key idea is to divide the detection process into multiple stages. During each stage, only a small region of the network is probed using a small set of probes, while it is ensured that the entire network is covered after multiple detection stages. This method guarantees that the traffic used by probes during each detection stage is small enough that the network can operate without severe disturbance from the probes. Several simulation results verify the effectiveness of the proposed method.
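    A schematic sketch of the staged idea with a stand-in probe function: each stage probes only one small region, so per-stage probe traffic stays bounded while every node is covered after all stages.

      NODES = [f"n{i}" for i in range(12)]
      FAULTY = {"n7"}                        # injected fault for the demo

      def probe(node):
          # Stand-in for an active measurement (ping, traceroute, etc.).
          return node not in FAULTY

      def staged_detection(nodes, region_size=4):
          faults = []
          regions = [nodes[i:i + region_size]
                     for i in range(0, len(nodes), region_size)]
          for region in regions:             # one detection stage per region
              faults += [n for n in region if not probe(n)]
          return faults

      print(staged_detection(NODES))         # -> ['n7']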

  9. AN OVERVIEW OF QUALITY OF SERVICE COMPUTER NETWORK

    OpenAIRE

    Mrs. Amandeep Kaur

    2011-01-01

    This paper introduces some basic concepts of QoS, highlights the major research areas of Quality of Service computer networks, and compares some current QoS routing techniques.

  10. Computer network data communication controller for the Plutonium Protection System (PPS)

    International Nuclear Information System (INIS)

    Rogers, M.S.

    1978-10-01

    Systems which employ several computers for distributed processing must provide communication links between the computers to effectively utilize their capacity. The technique of using a central network controller to supervise and route messages on a multicomputer digital communications net has certain economic and performance advantages over alternative implementations. Conceptually, the number of stations (computers) which can be accommodated by such a controller is unlimited, but practical considerations dictate a maximum of about 12 to 15. A Data Network Controller (DNC) has been designed around a M6800 microprocessor for use in the Plutonium Protection System (PPS) demonstration facilities

  11. TkPl_SU: An Open-source Perl Script Builder for Seismic Unix

    Science.gov (United States)

    Lorenzo, J. M.

    2017-12-01

    TkPl_SU (beta) is a graphical user interface (GUI) for selecting parameters for Seismic Unix (SU) modules. Seismic Unix (Stockwell, 1999) is a widely distributed free software package for seismic reflection processing and signal processing. Perl/Tk is a mature, well-documented and free object-oriented graphical user interface for Perl. In a classroom environment, shell scripting of SU modules engages students and helps focus on the theoretical limitations and strengths of signal processing. However, complex interactive processing stages, e.g., selection of optimal stacking velocities, killing bad data traces, or spectral analysis, require advanced flows beyond the scope of introductory classes. In a research setting, special functionality from other free seismic processing software such as SioSeis (UCSD-NSF) can be incorporated readily via an object-oriented style of programming. An object-oriented approach is a first step toward efficient extensible programming of multi-step processes, and a simple GUI simplifies parameter selection and decision making. Currently, in TkPl_SU, Perl 5 packages wrap 19 of the most common SU modules that are used in teaching undergraduate and first-year graduate student classes (e.g., filtering, display, velocity analysis and stacking). Perl packages (classes) can advantageously add new functionality around each module and clarify parameter names for easier usage. For example, through the use of methods, packages can isolate the user from repetitive control structures, as well as replace the names of abbreviated parameters with self-describing names. Moose, an extension of the Perl 5 object system, greatly facilitates an object-oriented style. Perl wrappers are self-documenting via POD, Perl's documentation markup language.

  12. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing.

    Science.gov (United States)

    Sillin, Henry O; Aguilera, Renato; Shieh, Hsien-Hang; Avizienis, Audrius V; Aono, Masakazu; Stieg, Adam Z; Gimzewski, James K

    2013-09-27

    Atomic switch networks (ASNs) have been shown to generate network level dynamics that resemble those observed in biological neural networks. To facilitate understanding and control of these behaviors, we developed a numerical model based on the synapse-like properties of individual atomic switches and the random nature of the network wiring. We validated the model against various experimental results highlighting the possibility to functionalize the network plasticity and the differences between an atomic switch in isolation and its behaviors in a network. The effects of changing connectivity density on the nonlinear dynamics were examined as characterized by higher harmonic generation in response to AC inputs. To demonstrate their utility for computation, we subjected the simulated network to training within the framework of reservoir computing and showed initial evidence of the ASN acting as a reservoir which may be optimized for specific tasks by adjusting the input gain. The work presented represents steps in a unified approach to experimentation and theory of complex systems to make ASNs a uniquely scalable platform for neuromorphic computing.
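    A minimal echo-state sketch of the reservoir framework mentioned above: a fixed random recurrent network (standing in for the simulated atomic switch network) is driven by an input, and only a linear readout is trained. The task, sizes, and scaling are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      N, T = 200, 2000
      W = rng.normal(scale=1 / np.sqrt(N), size=(N, N))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state scaling
      w_in = rng.normal(size=N)

      u = rng.uniform(-1, 1, size=T)                    # input signal
      target = np.sin(3 * np.cumsum(u) / np.sqrt(T))    # arbitrary nonlinear task

      x, states = np.zeros(N), []
      for t in range(T):
          x = np.tanh(W @ x + w_in * u[t])              # reservoir dynamics
          states.append(x.copy())
      S = np.array(states)

      w_out, *_ = np.linalg.lstsq(S, target, rcond=None)  # train readout only
      nmse = np.mean((S @ w_out - target) ** 2) / target.var()
      print("training NMSE:", round(float(nmse), 4))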

  13. Computer network prepared to handle massive data flow

    CERN Multimedia

    2006-01-01

    "Massive quantities of data will soon begin flowing from the largest scientific instrument ever built into an internationl network of computer centers, including one operated jointly by the University of Chicago and Indiana University." (2 pages)

  14. Embedded, everywhere: a research agenda for networked systems of embedded computers

    National Research Council Canada - National Science Library

    Committee on Networked Systems of Embedded Computers; National Research Council Staff; Division on Engineering and Physical Sciences; Computer Science and Telecommunications Board; National Academy of Sciences

    2001-01-01

    .... Embedded, Everywhere explores the potential of networked systems of embedded computers and the research challenges arising from embedding computation and communications technology into a wide variety of applications...

  15. Distinguishing humans from computers in the game of go: A complex network approach

    Science.gov (United States)

    Coquidé, C.; Georgeot, B.; Giraud, O.

    2017-08-01

    We compare complex networks built from the game of go and obtained from databases of human-played games with those obtained from computer-played games. Our investigations show that statistical features of the human-based networks and the computer-based networks differ, and that these differences can be statistically significant on a relatively small number of games using specific estimators. We show that the deterministic or stochastic nature of the computer algorithm playing the game can also be distinguished from these quantities. This can be seen as a tool to implement a Turing-like test for go simulators.

  16. NASA/DOD Aerospace Knowledge Diffusion Research Project. Paper 47: The value of computer networks in aerospace

    Science.gov (United States)

    Bishop, Ann Peterson; Pinelli, Thomas E.

    1995-01-01

    This paper presents data on the value of computer networks that were obtained from a national survey of 2000 aerospace engineers that was conducted in 1993. Survey respondents reported the extent to which they used computer networks in their work and communication and offered their assessments of the value of various network types and applications. They also provided information about the positive impacts of networks on their work, which presents another perspective on value. Finally, aerospace engineers' recommendations on network implementation present suggestions for increasing the value of computer networks within aerospace organizations.

  17. CEBAF control system

    International Nuclear Information System (INIS)

    Bork, R.; Grubb, C.; Lahti, G.; Navarro, E.; Sage, J.

    1989-01-01

    A logic-based computer control system is in development at CEBAF. This Unix/C language software package, running on a distributed, hierarchical system of workstation and supervisory minicomputers, interfaces to hardware via CAMAC. Software aspects to be covered are ladder logic, interactive database generation, networking, and graphic user interfaces. 1 fig

  18. Computer-Supported Modelling of Multimodal Transportation Networks Rationalization

    Directory of Open Access Journals (Sweden)

    Ratko Zelenika

    2007-09-01

    Full Text Available This paper deals with issues of shaping and functioning of computer programs in the modelling and solving of multimodal transportation network problems. A methodology for the integrated use of a programming language for mathematical modelling is defined, as well as spreadsheets for the solving of complex multimodal transportation network problems. The paper contains a comparison of the partial and integral methods of solving multimodal transportation networks. The basic hypothesis set forth in this paper is that the integral method results in better multimodal transportation network rationalization effects, whereas a multimodal transportation network model based on the integral method, once built, can be used as the basis for all kinds of transportation problems within multimodal transport. As opposed to linear transport problems, a multimodal transportation network can assume very complex shapes. This paper contains a comparison of the partial and integral approach to transportation network solving. In the partial approach, a straightforward model of a transportation network, which can be solved through the use of the Solver computer tool within the Excel spreadsheet interface, is quite sufficient. In the solving of a multimodal transportation problem through the integral method, it is necessary to apply sophisticated mathematical modelling programming languages which support the use of complex matrix functions and the processing of a vast amount of variables and limitations. The LINGO programming language is more abstract than the Excel spreadsheet, and it requires a certain programming knowledge. The definition and presentation of a problem logic within Excel, in a manner which is acceptable to computer software, is an ideal basis for modelling in the LINGO programming language, as well as a faster and more effective implementation of the mathematical model. This paper provides proof for the fact that it is more rational to solve the problem of multimodal transportation networks by
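    A tiny instance of the kind of model discussed above, a classical transportation problem posed as a linear program; scipy's linprog stands in for Excel Solver or LINGO, and the costs, supplies and demands are invented.

      import numpy as np
      from scipy.optimize import linprog

      cost = np.array([[4, 6, 9],        # cost[i][j]: source i -> destination j
                       [5, 3, 7]])
      supply = [40, 60]
      demand = [30, 50, 20]

      A_eq, b_eq = [], []
      for i in range(2):                 # each source ships exactly its supply
          row = np.zeros(6); row[3 * i:3 * i + 3] = 1
          A_eq.append(row); b_eq.append(supply[i])
      for j in range(3):                 # each destination receives its demand
          row = np.zeros(6); row[j::3] = 1
          A_eq.append(row); b_eq.append(demand[j])

      res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
      print(res.x.reshape(2, 3))         # optimal shipment plan
      print("total cost:", res.fun)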

  19. Coping with distributed computing

    International Nuclear Information System (INIS)

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given

  20. Human Inspired Self-developmental Model of Neural Network (HIM): Introducing Content/Form Computing

    Science.gov (United States)

    Krajíček, Jiří

    This paper presents cross-disciplinary research between medical/psychological evidence on human abilities and the need in informatics to update current models in computer science to support alternative methods of computation and communication. In [10] we have already proposed a hypothesis introducing the concept of a human information model (HIM) as a cooperative system. Here we continue with the design of HIM in detail. In our design, we first introduce the Content/Form computing system, a new principle underlying present methods in evolutionary computing (genetic algorithms, genetic programming). We then apply this system to the HIM (a type of artificial neural network) model as its basic network self-developmental paradigm. The main inspiration for our natural/human design comes from the well-known concept of artificial neural networks, medical/psychological evidence, and Sheldrake's theory of "Nature as Alive" [22].

  1. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  2. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics which are grouped together into chapters devoted to the discussion of novelties and innovations related to the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and  bio-inspired memristor-based networks.  Providing insights into the latest research interest from a pool of international experts coming from different research fields, the volume becomes valuable to all those with any interest in a holistic approach to implement believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  3. The ENSDF radioactivity data base for IBM-PC and computer network access

    International Nuclear Information System (INIS)

    Ekstroem, P.; Spanier, L.

    1989-08-01

    A database system for radioactivity gamma rays is described. A database with approximately 15000 gamma rays from 2777 decays is available for installation on the hard disk of a PC, and a complete system with approximately 73000 gamma rays is available for on-line access via the NORDic University computer NETwork (NORDUNET) and the Swedish University computer NETwork (SUNET).

  4. [Use of personal computers by diplomates of anesthesiology in Japan].

    Science.gov (United States)

    Yamamoto, K; Ohmura, S; Tsubokawa, T; Kita, M; Kushida, Y; Kobayashi, T

    1999-04-01

    The use of personal computers by diplomates of the Japanese Board of Anesthesiology working in Japanese university hospitals was investigated. Unsigned questionnaires were returned by 232 diplomates from 18 anesthesia departments. The ages of the responders ranged from the twenties to the sixties. Personal computer systems are used by 223 diplomates (96.1%), while nine (3.9%) do not use them. The computer systems used are: Apple Macintosh 77%, IBM-compatible PC 21%, and UNIX 2%. Although 197 diplomates have e-mail addresses, only 162 of them actually send and receive e-mail. Diplomates in their fifties use e-mail most actively, and those in their sixties come second.

  5. DB2 9 for Linux, Unix, and Windows database administration certification study guide

    CERN Document Server

    Sanders, Roger E

    2007-01-01

    In DB2 9 for Linux, UNIX, and Windows Database Administration Certification Study Guide, Roger E. Sanders, one of the world's leading DB2 authors and an active participant in the development of IBM's DB2 certification exams, covers everything a reader needs to know to pass the DB2 9 UDB DBA Certification Test (731). This comprehensive study guide steps you through all of the topics that are covered on the test, including server management, data placement, database access, analyzing DB2 activity, DB2 utilities, high availability, security, and much more. Each chapter contains an extensive set of p

  6. Synthetic tetracycline-inducible regulatory networks: computer-aided design of dynamic phenotypes

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2007-01-01

    Full Text Available Abstract. Background: Tightly regulated gene networks, precisely controlling the expression of protein molecules, have received considerable interest from the biomedical community due to their promising applications. Among the most well-studied inducible transcription systems are the tetracycline regulatory expression systems based on the tetracycline resistance operon of Escherichia coli, Tet-Off (tTA) and Tet-On (rtTA). Despite their initial success and improved designs, limitations still persist, such as low inducer sensitivity. Instead of looking at these networks statically, and simply changing or mutating the promoter and operator regions by trial and error, a systematic investigation of the dynamic behavior of the network can result in rational design of regulatory gene expression systems. Sophisticated algorithms can accurately capture the dynamical behavior of gene networks. With computer-aided design, we aim to improve the synthesis of regulatory networks and propose new designs that enable tighter control of expression. Results: In this paper we engineer novel networks by recombining existing genes or parts of genes. We synthesize four novel regulatory networks based on the Tet-Off and Tet-On systems. We model all the known individual biomolecular interactions involved in transcription, translation, regulation and induction. With multiple-time-scale stochastic-discrete and stochastic-continuous models we accurately capture the transient and steady-state dynamics of these networks. Important biomolecular interactions are identified and the strengths of the interactions engineered to satisfy design criteria. A set of clear design rules is developed and appropriate mutants of regulatory proteins and operator sites are proposed. Conclusion: The complexity of biomolecular interactions is accurately captured through computer simulations. Computer simulations allow us to look into the molecular level, portray the dynamic behavior of gene regulatory
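    A minimal Gillespie (stochastic simulation algorithm) sketch in the spirit of the stochastic-discrete models described above: one promoter switching between off and on, with protein production and decay. Species and rates are illustrative, not those of the tet systems.

      import numpy as np

      rng = np.random.default_rng(0)

      def gillespie(t_end=1000.0, k_on=0.05, k_off=0.02, k_prod=2.0, k_deg=0.05):
          t, gene_on, protein = 0.0, 0, 0
          trace = [(t, protein)]
          while t < t_end:
              rates = np.array([k_on * (1 - gene_on),   # promoter turns on
                                k_off * gene_on,        # promoter turns off
                                k_prod * gene_on,       # protein production
                                k_deg * protein])       # protein degradation
              total = rates.sum()
              if total == 0:
                  break
              t += rng.exponential(1 / total)           # waiting time
              r = rng.choice(4, p=rates / total)        # which reaction fires
              if r == 0:
                  gene_on = 1
              elif r == 1:
                  gene_on = 0
              elif r == 2:
                  protein += 1
              else:
                  protein -= 1
              trace.append((t, protein))
          return trace

      print(gillespie()[-1])    # final (time, protein count) of one realization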

  7. Paralelno umrežavanje računara / Parallel networking of the computers

    Directory of Open Access Journals (Sweden)

    Milojko Jevtović

    2007-04-01

    Full Text Available This paper presents an original concept for a technical solution for the parallel networking of computers, as well as of local area networks (LAN - Local Area Network), that is, their interconnection and simultaneous communication over several different transport telecommunication networks. One solution for parallel networking is described which enables reliable transfer of multimedia traffic and real-time data transmission between computers or LANs simultaneously over N (N = 1, 2, 3, 4, ...) different, mutually independent wide area networks (WAN - Wide Area Network). Parallel networking is based on the use of a universal modem, whose design is also briefly presented. Connections between the computers or LANs and the wide area networks are realized using universal modems.

  8. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a panel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effects of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.

  9. Parallel computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, D. C.; Murthy, D. V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic analysis capability on a distributed-memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a three-dimensional unsteady aerodynamic model and a panel discretization. Efficiencies up to 85 percent are demonstrated using 32 processors. The effects of subtask ordering, problem size and network topology are presented. A comparison to results on a shared-memory computer indicates that higher speedup is achieved on the distributed-memory system.

  10. GKIN: a tool for drawing genetic networks

    Directory of Open Access Journals (Sweden)

    Jonathan Arnold

    2012-03-01

    Full Text Available We present GKIN, a simulator and a comprehensive graphical interface in which one can draw the model specification of reactions between hypothesized molecular participants in a gene regulatory and biochemical reaction network (or genetic network for short). The solver is written in C++ in a nearly platform-independent manner to simulate large ensembles of models, and runs on PCs, Macintoshes, and UNIX machines; the graphical user interface is written in Java and can run as a standalone or WebStart application. The drawing capability for rendering a network significantly enhances the ease of use over other reaction network simulators, such as KINSOLVER (Aleman-Meza et al., 2009), and enforces a correct semantic specification of the network. In a usability study with novice users, drawing the network with GKIN was preferred and faster in comparison with entry through a dialog-box-guided interface in COPASI (Hoops et al., 2006), with no difference in error rates between GKIN and COPASI in specifying the network. GKIN is freely available at http://faculty.cs.wit.edu/~ldeligia/PROJECTS/GKIN/.

  11. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.

    Science.gov (United States)

    Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.
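    A hedged sketch of the inverse mapping described above, assuming scikit-learn: a feedforward network is trained on synthetic (response, stiffness) pairs to predict ligament stiffness from joint response. The toy forward model replaces the study's computational foot/ankle model, and all sizes are arbitrary.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      stiffness = rng.uniform(10, 100, size=(500, 3))    # 3 ligaments
      # Toy forward model: joint displacement falls as stiffness rises.
      response = 1.0 / stiffness + rng.normal(scale=1e-3, size=stiffness.shape)

      net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0)
      net.fit(response[:400], stiffness[:400])           # learn the inverse map
      print("held-out R^2:", round(net.score(response[400:], stiffness[400:]), 3))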

  12. Computational Models and Emergent Properties of Respiratory Neural Networks

    Science.gov (United States)

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  13. Main control computer security model of closed network systems protection against cyber attacks

    Science.gov (United States)

    Seymen, Bilal

    2014-06-01

    The model brings data input/output under control in closed network systems, maintains the system securely, and controls the flow of information through the Main Control Computer, which also brings network traffic under control against cyber attacks. The network, which can be controlled single-handedly thanks to a design that enables network users to enter data into the system or to extract data from it securely, is intended to minimize the security gaps. Moreover, a data input/output record can be kept by means of the user account assigned to each user, and retroactive tracking can be carried out, if requested. Because the measures that need to be taken for each computer on the network regarding cyber security incur high cost, this model is intended to provide a cost-effective working environment, since only the Main Control Computer needs updated hardware.

  14. Computationally Efficient Nonlinear Bell Inequalities for Quantum Networks

    Science.gov (United States)

    Luo, Ming-Xing

    2018-04-01

    The correlations in quantum networks have attracted strong interest with new types of violations of locality. The standard Bell inequalities cannot characterize the multipartite correlations that are generated by multiple sources. The main problem is that no computationally efficient method is available for constructing useful Bell inequalities for general quantum networks. In this work, we show a significant improvement by presenting new, explicit Bell-type inequalities for general networks including cyclic networks. These nonlinear inequalities are related to the matching problem of an equivalent unweighted bipartite graph, which allows the construction of a polynomial-time algorithm. For the quantum resources consisting of bipartite entangled pure states and generalized Greenberger-Horne-Zeilinger (GHZ) states, we prove the generic nonmultilocality of quantum networks with multiple independent observers using the new Bell inequalities. The violations are maximal with respect to the presented Tsirelson bound for Einstein-Podolsky-Rosen states and GHZ states. Moreover, these violations hold for Werner states or some general noisy states. Our results suggest that the presented Bell inequalities can be used to characterize experimental quantum networks.
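
    The polynomial-time construction rests on maximum matching in an unweighted bipartite graph. The sketch below shows the classic augmenting-path (Kuhn) algorithm on a toy graph; it illustrates only that matching subroutine, not the paper's reduction from network correlations to matchings.

```python
# Maximum matching in an unweighted bipartite graph via augmenting paths
# (Kuhn's algorithm). The graph is a toy example; the paper's actual
# reduction from network correlations to matchings is not reproduced here.
def max_bipartite_matching(adj, n_right):
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            size += 1
    return size, match_right

adj = [[0, 1], [0], [1, 2]]    # left vertex u -> list of right neighbours
print(max_bipartite_matching(adj, 3))   # (3, [1, 0, 2]) -- a perfect matching
```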

  15. Innovations and advances in computing, informatics, systems sciences, networking and engineering

    CERN Document Server

    Elleithy, Khaled

    2015-01-01

    Innovations and Advances in Computing, Informatics, Systems Sciences, Networking and Engineering. This book includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Informatics, Systems Sciences, and Engineering. It includes selected papers from the conference proceedings of the Eighth and some selected papers of the Ninth International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2012 & CISSE 2013). Coverage includes topics in: Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning. Provides the latest in a series of books growing out of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering; includes chapters in the most a...

  16. AN EVALUATION AND IMPLEMENTATION OF COLLABORATIVE AND SOCIAL NETWORKING TECHNOLOGIES FOR COMPUTER EDUCATION

    OpenAIRE

    Ronnie Cheung; Calvin Wan

    2011-01-01

    We have developed a collaborative and social networking environment that integrates the knowledge and skills in communication and computing studies with a multimedia development project. The outcomes of the students’ projects show that computer literacy can be enhanced through a cluster of communication, social, and digital skills. Experience in implementing a web-based social networking environment shows that the new media is an effective means of enriching knowledge by sharing in computer l...

  17. Topology and computational performance of attractor neural networks

    International Nuclear Information System (INIS)

    McGraw, Patrick N.; Menzinger, Michael

    2003-01-01

    To explore the relation between network structure and function, we studied the computational performance of Hopfield-type attractor neural nets with regular lattice, random, small-world, and scale-free topologies. The random configuration is the most efficient for storage and retrieval of patterns by the network as a whole. However, in the scale-free case retrieval errors are not distributed uniformly among the nodes. The portion of a pattern encoded by the subset of highly connected nodes is more robust and efficiently recognized than the rest of the pattern. The scale-free network thus achieves a very strong partial recognition. The implications of these findings for brain function and social dynamics are suggestive
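
    A minimal Hopfield attractor network of the kind studied here needs only Hebbian storage and asynchronous retrieval. The sketch below assumes a fully connected topology with arbitrary sizes; the lattice, small-world, and scale-free cases of the study amount to zeroing entries of the weight matrix with a chosen adjacency mask.

```python
# Hopfield attractor network: Hebbian storage, asynchronous retrieval.
# Fully connected here; the topology variants of the study correspond to
# masking entries of W with a chosen adjacency matrix.
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns).astype(float) / N   # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def retrieve(state, steps=2000):
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(N)                     # asynchronous update
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

probe = patterns[0].copy()
probe[rng.choice(N, size=15, replace=False)] *= -1   # corrupt 15% of the bits
overlap = retrieve(probe) @ patterns[0] / N
print(f"overlap with stored pattern after retrieval: {overlap:.2f}")
```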

  18. MDA-image: an environment of networked desktop computers for teleradiology/pathology.

    Science.gov (United States)

    Moffitt, M E; Richli, W R; Carrasco, C H; Wallace, S; Zimmerman, S O; Ayala, A G; Benjamin, R S; Chee, S; Wood, P; Daniels, P

    1991-04-01

    MDA-Image, a project of The University of Texas M. D. Anderson Cancer Center, is an environment of networked desktop computers for teleradiology/pathology. Radiographic film is digitized with a film scanner and histopathologic slides are digitized using a red, green, and blue (RGB) video camera connected to a microscope. Digitized images are stored on a data server connected to the institution's computer communication network (Ethernet) and can be displayed from authorized desktop computers connected to Ethernet. Images are digitized for cases presented at the Bone Tumor Management Conference, a multidisciplinary conference in which treatment options are discussed among clinicians, surgeons, radiologists, pathologists, radiotherapists, and medical oncologists. These radiographic and histologic images are shown on a large screen computer monitor during the conference. They are available for later review for follow-up or representation.

  19. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    Science.gov (United States)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay particular attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue; control systems that operate in networks relate especially to this issue. We propose a new approach to multi-agent management of scalable applications in heterogeneous computational networks. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating the problem solving. Advantages of the proposed approach are demonstrated on the example of the parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of the multi-agent control of the systems in a parallel mode with various degrees of detail.

  20. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    Science.gov (United States)

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
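
    For scale, the benchmark instance {2, 5, 9} can be checked by brute force on a conventional machine; each subset enumerated in the sketch below corresponds to one path an agent can take through the nanofabricated network, and the device's contribution is exploring all of them in parallel.

```python
# Brute-force check of the subset sum instance {2, 5, 9}: each subset is
# one path an agent can traverse; its sum is the exit it arrives at.
from itertools import combinations

values = (2, 5, 9)
sums = {}
for r in range(len(values) + 1):
    for subset in combinations(values, r):
        sums.setdefault(sum(subset), []).append(subset)

for total in sorted(sums):
    print(total, sums[total])
# Reachable totals 0, 2, 5, 7, 9, 11, 14, 16 are the legal exits;
# all other exits should stay empty in a correctly operating device.
```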

  1. Planning and management of cloud computing networks

    Science.gov (United States)

    Larumbe, Federico

    The evolution of the Internet has a great impact on a big part of the population. People use it to communicate, query information, receive news, work, and as entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of an important power consumption. If the power consumption of telecommunication networks and data centers were considered as that of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with an Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided in three main stages. We start by analyzing the planning of cloud computing networks to get a

  2. Traffic Dynamics of Computer Networks

    Science.gov (United States)

    Fekete, Attila

    2008-10-01

    Two important aspects of the Internet, namely the properties of its topology and the characteristics of its data traffic, have attracted the growing attention of the physics community. My thesis considers problems in both aspects. First I studied the stochastic behavior of TCP, the primary algorithm governing traffic in the current Internet, in an elementary network scenario consisting of a standalone infinite-sized buffer and an access link. The effect of the fast recovery and fast retransmission (FR/FR) algorithms is also considered. I showed that my model can be extended further to involve the effect of link propagation delay, characteristic of WANs. I continued my thesis with the investigation of finite-sized semi-bottleneck buffers, where packets can be dropped not only at the link, but also at the buffer. I demonstrated that the behavior of the system depends only on a certain combination of the parameters. Moreover, an analytic formula was derived that gives the ratio of the packet loss rate at the buffer to the total packet loss rate. This formula makes it possible to treat buffer-losses as if they were link-losses. Finally, I studied computer networks from a structural perspective. I demonstrated through fluid simulations that the distribution of resources, specifically the link bandwidth, has a serious impact on the global performance of the network. Then I analyzed the distribution of edge betweenness in a growing scale-free tree, under the condition that a local property, the in-degree of the "younger" node of an arbitrary edge, is known, in order to find an optimum distribution of link capacity. The derived formula is exact even for finite-sized networks. I also calculated the conditional expectation of edge betweenness, rescaled for infinite networks.

  3. Migration of the Almaraz NPP integrated operation management system to a new computer platform

    International Nuclear Information System (INIS)

    Gonzalez Crego, E.; Martin Lopez-Suevos, C.

    1996-01-01

    In all power plants it becomes necessary, with the passage of time, to migrate the initial operation management systems to adapt them to current technologies. That is a good time to improve the inclusion of data in the corporate database and to standardize the system interfaces and operation, whilst maintaining data system operability. This article describes Almaraz's experience in migrating its Integrated Operation Management System to an advanced computer platform based on open systems (UNIX), a communications network (ETHERNET) and a database (ORACLE). To this effect, clear objectives and strict standards were established to facilitate the work. The most noteworthy results obtained are: better quality and structure of information in the corporate database; a standardised user interface in all applications; joint migration of the applications for Maintenance, Components and Spare parts, Warehouses and Purchases; integration of new applications into the system; and introduction of the navigator, which allows movement around the database using all available applications. (Author)

  4. Predictive Control of Networked Multiagent Systems via Cloud Computing.

    Science.gov (United States)

    Liu, Guo-Ping

    2017-01-18

    This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to compensate for network delays actively. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives the necessary and sufficient conditions of stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinative control of NMASs and its applications.

  5. Multi-objective optimization in computer networks using metaheuristics

    CERN Document Server

    Donoso, Yezid

    2007-01-01

    Metaheuristics are widely used to solve important practical combinatorial optimization problems. Many new multicast applications emerging from the Internet, such as TV over the Internet, radio over the Internet, and multipoint video streaming, require reduced bandwidth consumption, end-to-end delay, and packet loss ratio. It is necessary to design and provision these kinds of applications, as well as the resources they need to function. Multi-Objective Optimization in Computer Networks Using Metaheuristics provides a solution to the multi-objective routing problem in computer networks. It analyzes layer 3 (IP), layer 2 (MPLS), and layer 1 (GMPLS and wireless functions). In particular, it assesses basic optimization concepts, as well as several techniques and algorithms for the search of minima; examines the basic multi-objective optimization concepts and the ways to solve them through traditional techniques and through several metaheuristics; and demonstrates how to analytically model the compu...

  6. A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises

    DEFF Research Database (Denmark)

    Kuada, Eric; Olesen, Henning

    2011-01-01

    This paper proposes a social network approach to the provisioning and management of cloud computing services termed Opportunistic Cloud Computing Services (OCCS), for enterprises; and presents the research issues that need to be addressed for its implementation. We hypothesise that OCCS ... will facilitate the adoption process of cloud computing services by enterprises. OCCS deals with the concept of enterprises taking advantage of cloud computing services to meet their business needs without having to pay or paying a minimal fee for the services. The OCCS network will be modelled and implemented ... as a social network of enterprises collaborating strategically for the provisioning and consumption of cloud computing services without entering into any business agreements. We conclude that it is possible to configure current cloud service technologies and management tools for OCCS but there is a need

  7. Power Consumption Evaluation of Distributed Computing Network Considering Traffic Locality

    Science.gov (United States)

    Ogawa, Yukio; Hasegawa, Go; Murata, Masayuki

    When computing resources are consolidated in a few huge data centers, a massive amount of data is transferred to each data center over a wide area network (WAN). This results in increased power consumption in the WAN. A distributed computing network (DCN), such as a content delivery network, can reduce the traffic from/to the data center, thereby decreasing the power consumed in the WAN. In this paper, we focus on the energy-saving aspect of the DCN and evaluate its effectiveness, especially considering traffic locality, i.e., the amount of traffic related to the geographical vicinity. We first formulate the problem of optimizing the DCN power consumption and describe the DCN in detail. Then, numerical evaluations show that, when there is strong traffic locality and the router has ideal energy proportionality, the system's power consumption is reduced to about 50% of the power consumed in the case where a DCN is not used; moreover, this advantage becomes even larger (up to about 30%) when the data center is located farthest from the center of the network topology.

  8. Contribution to the data acquisition software of the Eurogam and Diamant multidetectors in a Unix/VxWorks environment

    International Nuclear Information System (INIS)

    Diarra, C.

    1994-06-01

    Questions about nuclear matter require new, high-performance equipment. Eurogam is a 4π gamma-radiation multidetector and a precious tool in gamma spectroscopy, but it is necessary to use a charged-particle detector alongside it, and to this end Diamant is Eurogam's partner. These two multidetectors needed dedicated data-acquisition software systems. The whole of the acquisition control and management is based on Sun stations running UNIX. 56 figs., 64 refs

  9. Correlation between Academic and Skills-Based Tests in Computer Networks

    Science.gov (United States)

    Buchanan, William

    2006-01-01

    Computing-related programmes and modules have many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and an increased requirement from students for increased amounts of hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…

  10. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  11. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    Science.gov (United States)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing defines a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.

  12. STAR- A SIMPLE TOOL FOR AUTOMATED REASONING SUPPORTING HYBRID APPLICATIONS OF ARTIFICIAL INTELLIGENCE (UNIX VERSION)

    Science.gov (United States)

    Borchardt, G. C.

    1994-01-01

    /output required, and displays the results. The STAR interpreter is written in the C language for interactive execution. It has been implemented on a VAX 11/780 computer operating under VMS, and the UNIX version has been implemented on a Sun Microsystems 2/170 workstation. STAR has a memory requirement of approximately 200K of 8 bit bytes, excluding externally compiled functions and application-dependent symbolic definitions. This program was developed in 1985.

  13. Computer Networks E-learning Based on Interactive Simulations and SCORM

    Directory of Open Access Journals (Sweden)

    Francisco Andrés Candelas

    2011-05-01

    This paper introduces a new set of compact interactive simulations developed for the constructive learning of computer networks concepts. These simulations, which compose a virtual laboratory implemented as portable Java applets, have been created by combining EJS (Easy Java Simulations) with the KivaNS API. Furthermore, this work evaluates and measures the skills and motivation level acquired by the students when these simulations are combined with Moodle and SCORM (Sharable Content Object Reference Model) documents. This study has been developed to improve and stimulate autonomous constructive learning, in addition to providing timetable flexibility for a Computer Networks subject.

  14. Proceedings: Distributed digital systems, plant process computers, and networks

    International Nuclear Information System (INIS)

    1995-03-01

    These are the proceedings of a workshop on Distributed Digital Systems, Plant Process Computers, and Networks held in Charlotte, North Carolina on August 16-18, 1994. The purpose of the workshop was to provide a forum for technology transfer, technical information exchange, and education. The workshop was attended by more than 100 representatives of electric utilities, equipment manufacturers, engineering service organizations, and government agencies. The workshop consisted of three days of presentations, exhibitions, a panel discussion and attendee interactions. Original plant process computers at nuclear power plants are becoming obsolete, making it increasingly difficult for them to support plant operations and maintenance effectively. Some utilities have already replaced their plant process computers with more powerful modern computers, while many other utilities intend to replace their aging plant process computers in the future. Information on recent and planned implementations is presented. Choosing an appropriate communications and computing network architecture facilitates integrating new systems and provides functional modularity for both hardware and software. Control room improvements such as CRT-based distributed monitoring and control, as well as digital decision and diagnostic aids, can improve plant operations. Commercially available digital products connected to the plant communications system are now readily available to provide distributed processing where needed. Plant operations, maintenance activities, and engineering analyses can be supported in a cost-effective manner. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database

  15. Computer Network Attack Versus Operational Maneuver from the Sea

    National Research Council Canada - National Science Library

    Herdegen, Dale

    2000-01-01

    ...) vulnerable to computer network attack (CNA). Mission command and control can reduce the impact of the loss of command and control, but it cannot overcome the vast and complex array of threats...

  16. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ruchi D. Chande

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  17. Personal computer local networks report

    CERN Document Server

    1991-01-01

    Please note this is a Short Discount publication. Since the first microcomputer local networks of the late 1970's and early 80's, personal computer LANs have expanded in popularity, especially since the introduction of IBMs first PC in 1981. The late 1980s has seen a maturing in the industry with only a few vendors maintaining a large share of the market. This report is intended to give the reader a thorough understanding of the technology used to build these systems ... from cable to chips ... to ... protocols to servers. The report also fully defines PC LANs and the marketplace, with in-

  18. Distributed computing at the SSCL

    International Nuclear Information System (INIS)

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given

  19. Distributed computing at the SSCL

    International Nuclear Information System (INIS)

    Cormell, L.R.; White, R.C.

    1994-01-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of the central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory (SSCL). In addition, a brief review of the future directions of commercial products for distributed computing and management will be given

  20. THE IMPROVEMENT OF COMPUTER NETWORK PERFORMANCE WITH BANDWIDTH MANAGEMENT IN KEMURNIAN II SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Bayu Kanigoro

    2012-05-01

    This research describes the improvement of computer network performance with bandwidth management at Kemurnian II Senior High School. The main issue is the absence of bandwidth division among computers: when a user is downloading data, that user absorbs all of the provided bandwidth, and other users do not get any. In addition, IP addresses had been divided by room (computer, teacher and administration rooms) to support the learning process at Kemurnian II Senior High School, so a wireless network is needed. The methods comprise on-site observation and interviews with the related parties at Kemurnian II Senior High School, analysis of the running network, and the design of a new network topology including the wireless network, along with its configuration and the separation and limiting of bandwidth on a MikroTik router. The result is that network traffic at Kemurnian II Senior High School can be shared evenly among users; IX and IIX traffic are separated, which improves the speed of network access at school, together with the implementation of the wireless network. Keywords: Bandwidth Management; Wireless Network

  1. Computing Nash Equilibrium in Wireless Ad Hoc Networks

    DEFF Research Database (Denmark)

    Bulychev, Peter E.; David, Alexandre; Larsen, Kim G.

    2012-01-01

    This paper studies the problem of computing Nash equilibrium in wireless networks modeled by Weighted Timed Automata. Such formalism comes together with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem...

  2. Mississippi Curriculum Framework for Computer Information Systems Technology. Computer Information Systems Technology (Program CIP: 52.1201--Management Information Systems & Business Data). Computer Programming (Program CIP: 52.1201). Network Support (Program CIP: 52.1290--Computer Network Support Technology). Postsecondary Programs.

    Science.gov (United States)

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…

  3. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computers and systems. Books on software engineering typically portray software as if it exists in a vacuum, with no relationship to the wider system. This is wrong, because a system is more than software. It is comprised of people, organizations, processes, hardware, and software. All of these components must be considered in an integr

  4. Parallelization of MCNP4 code by using simple FORTRAN algorithms

    International Nuclear Information System (INIS)

    Yazid, P.I.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka.

    1993-12-01

    Simple FORTRAN algorithms that rely only on open, close, read and write statements, together with disk files and some UNIX commands, have been applied to the parallelization of MCNP4. The code, named MCNPNFS, maintains almost all capabilities of MCNP4 in solving shielding problems. It is able to perform parallel computing on any set of UNIX workstations connected by a network, regardless of the heterogeneity of the hardware, provided that all processors produce binary files in the same format. Further, it is confirmed that MCNPNFS can also be executed on the Monte-4 vector-parallel computer. MCNPNFS has been tested intensively by executing 5 photon-neutron benchmark problems, a spent-fuel cask problem and 17 sample problems included in the original code package of MCNP4. Three different workstations, connected by a network, have been used to execute MCNPNFS in parallel. By measuring CPU time, the parallel efficiency is determined to be 58% to 99%, 86% on average. On Monte-4, MCNPNFS has been executed using 4 processors concurrently and has achieved a parallel efficiency of 79% on average. (author)
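
    The coordination style described, plain files plus independently launched processes, can be sketched in miniature. In the toy below, a Monte Carlo estimate of pi stands in for an MCNP transport run and local subprocesses stand in for networked workstations; the file names and the master/worker split are illustrative assumptions, not MCNPNFS's actual scripts.

```python
# Miniature of the file-based master/worker scheme: the master writes one
# input file per worker, launches the workers (locally here; across a
# network one would use rsh/ssh), waits, then merges the result files.
# A Monte Carlo estimate of pi stands in for an MCNP transport run.
import pathlib, random, subprocess, sys

def worker(task_file):
    seed, histories = map(int, pathlib.Path(task_file).read_text().split())
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(histories))
    pathlib.Path(task_file + ".out").write_text(str(4.0 * hits / histories))

def master(n_workers=4, histories=100_000):
    procs = []
    for i in range(n_workers):
        task = f"task{i}.in"
        pathlib.Path(task).write_text(f"{i} {histories}")   # distribute work
        procs.append(subprocess.Popen([sys.executable, __file__, task]))
    for p in procs:                                         # wait for all workers
        p.wait()
    results = [float(pathlib.Path(f"task{i}.in.out").read_text())
               for i in range(n_workers)]
    print("merged estimate of pi:", sum(results) / n_workers)

if __name__ == "__main__":
    master() if len(sys.argv) == 1 else worker(sys.argv[1])
```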

  5. Advances in neural networks computational intelligence for ICT

    CERN Document Server

    Esposito, Anna; Morabito, Francesco; Pasero, Eros

    2016-01-01

    This carefully edited book puts emphasis on computational and artificial intelligent methods for learning and their applications in robotics, embedded systems, and ICT interfaces for psychological and neurological diseases. The book is a follow-up of the scientific workshop on Neural Networks (WIRN 2015) held in Vietri sul Mare, Italy, from the 20th to the 22nd of May 2015. The workshop, now at its 27th edition, has become a traditional scientific event that brings together scientists from many countries and several scientific disciplines. Each chapter is an extended version of the original contribution presented at the workshop, and together with the reviewers' peer revisions it also benefits from the live discussion during the presentation. The content of the book is organized in the following sections: 1. Introduction, 2. Machine Learning, 3. Artificial Neural Networks: Algorithms and models, 4. Intelligent Cyberphysical and Embedded System, 5. Computational Intelligence Methods for Biomedical ICT in...

  6. Zaštita računarskih mreža / Protection of computer networks

    Directory of Open Access Journals (Sweden)

    Milojko Jevtović

    2005-09-01

    This paper describes methods of attack, forms of endangerment, and types of threats to which computer networks are exposed, as well as possible methods and technical solutions for network protection. The effects of the threats to which computer networks, and the information transmitted over them, may be exposed are analyzed. Certain technical solutions that provide the necessary level of protection for computer networks are described, as well as measures for protecting the information transmitted over them. The standards concerning methods and procedures for the cryptographic protection of information in computer networks are listed. An example of protecting a local area network is also given in this paper.

  7. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

    The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given the fact that accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks, by a factor of 100 compared with existing algorithms.
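
    The building block that NTP-A* extends is plain A* search with an admissible heuristic. The sketch below runs A* on a toy four-node road network using straight-line distance as the heuristic; NTP-A*'s branch-and-bound pruning of inaccessible links is not reproduced.

```python
# Core A* shortest-path search on a toy road network; NTP-A*'s
# branch-and-bound pruning of inaccessible links is not reproduced here.
import heapq, math

coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}  # node -> (x, y)
edges = {"A": [("B", 1.0), ("C", 1.8)], "B": [("D", 1.5)],
         "C": [("D", 1.0)], "D": []}

def heuristic(u, goal):                 # straight-line distance, admissible
    (x1, y1), (x2, y2) = coords[u], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)

def astar(start, goal):
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {}                           # node -> best known cost so far
    while frontier:
        f, g, u, path = heapq.heappop(frontier)
        if u == goal:
            return g, path
        if best.get(u, math.inf) <= g:
            continue
        best[u] = g
        for v, w in edges[u]:
            heapq.heappush(frontier,
                           (g + w + heuristic(v, goal), g + w, v, path + [v]))
    return math.inf, []

print(astar("A", "D"))   # (2.5, ['A', 'B', 'D'])
```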

  8. Parallel Computation of Unsteady Flows on a Network of Workstations

    Science.gov (United States)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. The utilization of a network of workstations seems an efficient solution, whereby large problems can be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstations, 2) efficient communication tools, 3) managing the system efficiently for a given problem. There is also, of course, the question of how efficient any given numerical algorithm is on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code both two- and three-dimensional problems were studied, and both steady and unsteady problems were investigated. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and how to communicate at each node efficiently, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.

  9. Optimization of stochastic discrete systems and control on complex networks computational networks

    CERN Document Server

    Lozovanu, Dmitrii

    2014-01-01

    This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic con...

  10. Development of a tracer transport option for the NAPSAC fracture network computer code

    International Nuclear Information System (INIS)

    Herbert, A.W.

    1990-06-01

    The NAPSAC computer code predicts groundwater flow through fractured rock using a direct fracture network approach. This paper describes the development of a tracer transport algorithm for the NAPSAC code. A very efficient particle-following approach is used, enabling tracer transport to be predicted through large fracture networks. The new algorithm is tested against three examples. These demonstrations confirm the accuracy of the code for simple networks, where there is an analytical solution to the transport problem, and illustrate the use of the computer code on a more realistic problem. (author)

  11. Computational study of noise in a large signal transduction network

    Directory of Open Access Journals (Sweden)

    Ruohonen Keijo

    2011-06-01

    Background: Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. Results: We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and the β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased at all frequencies when the system volume was increased. Conclusions: We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies.
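
    The exact Gillespie algorithm referred to above is easiest to see on a toy birth-death process, far simpler than the four-pathway network simulated in the paper; the rate constants and volume below are illustrative. The copy-number fluctuations it produces are the kind of noise the frequency-domain analysis then characterizes.

```python
# Exact Gillespie simulation of a toy birth-death process:
#   0 -> X  at propensity k1*V   (creation)
#   X -> 0  at propensity k2*n   (degradation)
# Illustrative constants; not the paper's four-pathway network.
import math, random

def gillespie(k1=10.0, k2=0.1, V=1.0, t_end=100.0, seed=0):
    rng = random.Random(seed)
    t, n, trace = 0.0, 0, []
    while t < t_end:
        a1, a2 = k1 * V, k2 * n                   # reaction propensities
        a0 = a1 + a2
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        n += 1 if rng.random() * a0 < a1 else -1  # choose reaction by weight
        trace.append((t, n))
    return trace

tail = [n for t, n in gillespie() if t > 20.0]    # discard the transient
mean = sum(tail) / len(tail)
var = sum((n - mean) ** 2 for n in tail) / len(tail)
print(f"mean copy number ~ {mean:.1f}, Fano factor ~ {var / mean:.2f}")
```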

  12. Application of local computer networks in nuclear-physical experiments and technology

    International Nuclear Information System (INIS)

    Foteev, V.A.

    1986-01-01

    The principles of construction, comparative performance and potential of local computer networks with respect to their application in physical experiments are considered. The principle of operation of local networks is shown on the basis of the Ethernet network, and the results of an analysis of their operating performance are given. Examples of operating local networks in the area of nuclear-physics research and nuclear technology are presented, as follows: the networks of the Japan Atomic Energy Research Institute, California University and Los Alamos National Laboratory; network implementations based on the DECnet and Fastbus programs; network configurations of the USSR Academy of Sciences and the JINR Neutron Physical Laboratory, etc. It is shown that local networks allow productivity in the sphere of data processing to be raised significantly

  13. FY 1999 Blue Book: Computing, Information, and Communications: Networked Computing for the 21st Century

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — U.S. research and development (R&D) in computing, communications, and information technologies has enabled unprecedented scientific and engineering advances,...

  14. Cloud Computing Application of Personal Information's Security in Network Sales-channels

    OpenAIRE

    Sun Qiong; Min Liu; Shiming Pang

    2013-01-01

    With the promotion of Internet sales, the security of personal information has become an increasingly demanding concern for network users. Existing network sales channels carry personal-information security risks and are vulnerable to hacker attacks. Taking full advantage of a cloud security management strategy, a cloud computing security management model is introduced for personal-information security in network sales channels, in order to solve the problem of information leakage. Then we proposed me...

  15. Service-oriented Software Defined Optical Networks for Cloud Computing

    Science.gov (United States)

    Liu, Yuze; Li, Hui; Ji, Yuefeng

    2017-10-01

    With the development of big data and cloud computing technology, the traditional software-defined network is facing new challenges (e.g., ubiquitous accessibility, higher bandwidth, more flexible management and greater security). This paper proposes a new service-oriented software-defined optical network architecture, comprising a resource layer, a service abstract layer, a control layer and an application layer. We then dwell on the corresponding service-providing method. A distinct service ID is used to identify the service a device can offer. Finally, we experimentally show that the proposed service-providing method can be applied to transmit different services, based on the service ID, in the service-oriented software-defined optical network.

  16. Including Internet insurance as part of a hospital computer network security plan.

    Science.gov (United States)

    Riccardi, Ken

    2002-01-01

    Cyber attacks on a hospital's computer network are a new crime to be reckoned with. Should your hospital consider Internet insurance? The author explains this new phenomenon and presents a risk assessment for determining network vulnerabilities.

  17. A constructive logic for services and information flow in computer networks

    NARCIS (Netherlands)

    Borghuis, V.A.J.; Feijs, L.M.G.

    2000-01-01

    In this paper we introduce a typed λ-calculus in which computer networks can be formalized, directed at situations where the services available on the network are stationary while the information can flow freely. For this calculus, an analogue of the 'propositions-as-types' interpretation of

  18. The Business Perspective of Cloud Computing: Actors, Roles, and Value Networks

    OpenAIRE

    Leimeister, Stefanie; Riedl, Christoph; Böhm, Markus; Krcmar, Helmut

    2014-01-01

    With the rise of a ubiquitous provision of computing resources over the past years, cloud computing has been established as a prominent research topic. Many researchers, however, focus exclusively on the technical aspects of cloud computing, thereby neglecting the business opportunities and potentials cloud computing can offer. Enabled through this technology, new market players and business value networks arise and break up the traditional value chain of service provision. The focus of this ...

  19. Services Recommendation System based on Heterogeneous Network Analysis in Cloud Computing

    OpenAIRE

    Junping Dong; Qingyu Xiong; Junhao Wen; Peng Li

    2014-01-01

    Resources are provided mainly in the form of services in cloud computing. In the distributed environment of cloud computing, how to find the needed services efficiently and accurately is the most urgent problem. In cloud computing, services are the intermediary of the cloud platform; they connect many service providers and requesters, forming a complex heterogeneous network. The traditional recommendation systems only consider the functional and non-functi...

  20. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different forms of neural plasticity rules and in understanding how the structures and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing models of LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better-reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
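
    Of the two plasticity rules combined here, pair-based STDP is compact enough to state directly: the weight change depends exponentially on the pre/post spike-time difference. The sketch below uses illustrative amplitudes and time constants and omits the intrinsic-plasticity rule that regulates excitability.

```python
# Pair-based STDP on a single synapse, the first of the two plasticity
# rules combined in the paper; the amplitudes and time constant here are
# illustrative assumptions, not the paper's parameters.
import math

A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau = 20.0                      # STDP time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: potentiate
        return A_plus * math.exp(-dt / tau)
    else:         # post fires before pre: depress
        return -A_minus * math.exp(dt / tau)

w = 0.5
for t_pre, t_post in [(0.0, 5.0), (30.0, 28.0), (60.0, 62.0)]:
    w += stdp_dw(t_pre, t_post)
    print(f"pre={t_pre:5.1f} ms, post={t_post:5.1f} ms, w -> {w:.4f}")
```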

  1. The status of computing and means of local and external networking at JINR

    Energy Technology Data Exchange (ETDEWEB)

    Dorokhin, A T; Shirikov, V P

    1996-12-31

    The goal of this report is to present a view of the current state of computer support for the different physics research carried out at JINR. The JINR network and its applications are considered. Trends in local networks and connectivity with global networks are discussed. 3 refs.

  2. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    Science.gov (United States)

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…

  3. Developing computer systems to support emergency operations: Standardization efforts by the Department of Energy and implementation at the DOE Savannah River Site

    International Nuclear Information System (INIS)

    DeBusk, R.E.; Fulton, G.J.; O'Dell, J.J.

    1990-01-01

    This paper describes the development of standards for emergency operations computer systems for the US Department of Energy (DOE). The proposed DOE computer standards prescribe the necessary power and simplicity to meet the expanding needs of emergency managers. Standards include networked UNIX workstations based on the client-server model and software that presents information graphically using icons and windowing technology. The proposed DOE standards are based on those of the computer industry, although DOE is implementing the latest technology to ensure a solid base for future growth. A case study of how these proposed standards are being implemented is also presented. The Savannah River Site (SRS), a DOE facility near Aiken, South Carolina, is automating a manual information system proven over years of development. This system is generalized as a model that can apply to most, if not all, Emergency Operations Centers. This model can provide timely and validated information to emergency managers. By automating this proven system, the system is made easier to use. As experience in the case study demonstrates, computers are only an effective information tool when used as part of a proven process

  4. Operator decision aid for breached fuel operation in liquid metal cooled nuclear reactors

    International Nuclear Information System (INIS)

    Gross, K.C.; Hawkins, R.E.; Nickless, W.K.

    1991-01-01

    The purpose of this paper is to report the development of an expert system that provides continuous assessment of the safety significance and technical specification conformance of Delayed Neutron (DN) signals during breached fuel operation. The completed expert system has been parallelized on an innovative distributed-memory network-computing system that enables the computationally intensive kernel of the expert system to run in parallel on a group of low-cost Unix workstations. 1 ref

  5. THE COMPUTATIONAL INTELLIGENCE TECHNIQUES FOR PREDICTIONS - ARTIFICIAL NEURAL NETWORKS

    OpenAIRE

    Mary Violeta Bar

    2014-01-01

    Computational intelligence techniques are used in problems which cannot be solved by traditional techniques, when there are insufficient data to develop a model of the problem or when the data contain errors. Computational intelligence, as Bezdek called it (Bezdek, 1992), aims at the modeling of biological intelligence. Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is solving problems that are too c...

  6. Extraction of drainage networks from large terrain datasets using high throughput computing

    Science.gov (United States)

    Gong, Jianya; Xie, Jibo

    2009-02-01

    Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). How to process and use these LTD has become a big challenge for GIS users. Extracting drainage networks, which are basic for hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage networks extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of using regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. A HTC environment is employed to test the proposed methods with real datasets.

  7. Computer simulation of randomly cross-linked polymer networks

    International Nuclear Information System (INIS)

    Williams, Timothy Philip

    2002-01-01

    In this work, Monte Carlo and Stochastic Dynamics computer simulations of mesoscale model randomly cross-linked networks were undertaken. Task parallel implementations of the lattice Monte Carlo Bond Fluctuation model and Kremer-Grest Stochastic Dynamics bead-spring continuum model were designed and used for this purpose. Lattice and continuum precursor melt systems were prepared and then cross-linked to varying degrees. The resultant networks were used to study structural changes during deformation and relaxation dynamics. The effects of a random network topology featuring a polydisperse distribution of strand lengths and an abundance of pendant chain ends were qualitatively compared to recent published work. A preliminary investigation into the effects of temperature on the structural and dynamical properties was also undertaken. Structural changes during isotropic swelling and uniaxial deformation revealed a pronounced non-affine deformation dependent on the degree of cross-linking. Fractal heterogeneities were observed in the swollen model networks and were analysed by considering constituent substructures of varying size. The network connectivity determined the length scales at which the majority of the substructure unfolding process occurred. Simulated stress-strain curves and diffraction patterns for uniaxially deformed swollen networks were found to be consistent with experimental findings. Analysis of the relaxation dynamics of various network components revealed a dramatic slowdown due to the network connectivity. The cross-link junction spatial fluctuations for networks close to the sol-gel threshold were observed to be at least comparable with the phantom network prediction. The dangling chain ends were found to display the largest characteristic relaxation time. (author)

  8. Why do Reservoir Computing Networks Predict Chaotic Systems so Well?

    Science.gov (United States)

    Lu, Zhixin; Pathak, Jaideep; Girvan, Michelle; Hunt, Brian; Ott, Edward

    Recently a new type of artificial neural network, which is called a reservoir computing network (RCN), has been employed to predict the evolution of chaotic dynamical systems from measured data and without a priori knowledge of the governing equations of the system. The quality of these predictions has been found to be spectacularly good. Here, we present a dynamical-system-based theory for how RCN works. Basically a RCN is thought of as consisting of three parts, a randomly chosen input layer, a randomly chosen recurrent network (the reservoir), and an output layer. The advantage of the RCN framework is that training is done only on the linear output layer, making it computationally feasible for the reservoir dimensionality to be large. In this presentation, we address the underlying dynamical mechanisms of RCN function by employing the concepts of generalized synchronization and conditional Lyapunov exponents. Using this framework, we propose conditions on reservoir dynamics necessary for good prediction performance. By looking at the RCN from this dynamical systems point of view, we gain a deeper understanding of its surprising computational power, as well as insights on how to design a RCN. Supported by Army Research Office Grant Number W911NF1210101.
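
    A minimal sketch of the three-part RCN structure described above (random input layer, fixed random reservoir, trained linear output layer), assuming a toy one-dimensional signal and illustrative sizes and scalings; only the ridge-regression readout is trained, which is what keeps large reservoirs computationally feasible.

        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 300, 3000                          # reservoir size, training length

        # Toy one-dimensional signal to predict one step ahead.
        t = np.arange(T + 1)
        u = np.sin(0.3 * t) * np.cos(0.11 * t)

        # Randomly chosen input weights and recurrent reservoir (both fixed).
        W_in = rng.uniform(-0.5, 0.5, N)
        W = rng.normal(0.0, 1.0, (N, N))
        W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius below 1

        # Drive the reservoir and collect its states.
        x = np.zeros(N)
        states = np.zeros((T, N))
        for k in range(T):
            x = np.tanh(W @ x + W_in * u[k])
            states[k] = x

        # Training touches only the linear output layer (ridge regression).
        beta = 1e-6
        target = u[1:T + 1]
        W_out = np.linalg.solve(states.T @ states + beta * np.eye(N),
                                states.T @ target)

        pred = states @ W_out
        print("train RMSE:", np.sqrt(np.mean((pred - target) ** 2)))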

  9. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer-assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  10. EPRI engineering workstation software - Discussion and demonstration

    International Nuclear Information System (INIS)

    Stewart, R.P.; Peterson, C.E.; Agee, L.J.

    1992-01-01

    Computing technology is undergoing significant changes with respect to engineering applications in the electric utility industry. These changes result mainly from the introduction of several UNIX workstations that provide mainframe calculational capability at much lower costs. The workstations are being coupled with microcomputers through local area networks to provide engineering groups with a powerful and versatile analysis capability. PEGASYS, the Professional Engineering Graphic Analysis System, is a software package for use with engineering analysis codes executing in a workstation environment. PEGASYS has a menu-driven, user-friendly interface that provides pre-execution support for preparing input, graphical packages for post-execution analysis, and an on-line monitoring capability for engineering codes. The initial application of this software is for use with RETRAN-02 operating on an IBM RS/6000 workstation using X-Windows/UNIX and a personal computer under DOS

  11. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
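
    As an illustration of the kind of model such reviews cover, here is a minimal pair-based spike-timing-dependent plasticity (STDP) rule; the learning rates and time constants are illustrative placeholders, not values from the article.

        import numpy as np

        # Pair-based STDP: potentiate when pre fires before post, depress otherwise.
        A_plus, A_minus = 0.01, 0.012      # illustrative learning rates
        tau_plus = tau_minus = 20.0        # trace time constants (ms)

        def stdp_dw(delta_t):
            # Weight change for one pre/post pair; delta_t = t_post - t_pre (ms).
            if delta_t > 0:                # causal pair -> potentiation
                return A_plus * np.exp(-delta_t / tau_plus)
            return -A_minus * np.exp(delta_t / tau_minus)   # acausal -> depression

        w = 0.5
        for dt in [5.0, -5.0, 15.0, -30.0]:     # example spike-time differences
            w = float(np.clip(w + stdp_dw(dt), 0.0, 1.0))
            print(f"dt={dt:+.0f} ms -> w={w:.4f}")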

  12. Introduction to Computational Physics for Undergraduates

    Science.gov (United States)

    Zubairi, Omair; Weber, Fridolin

    2018-03-01

    This is an introductory textbook on computational methods and techniques intended for undergraduates at the sophomore or junior level in the fields of science, mathematics, and engineering. It provides an introduction to programming languages such as FORTRAN 90/95/2000 and covers numerical techniques such as differentiation, integration, root finding, and data fitting. The textbook also covers the use of the Linux/Unix operating system and other relevant software such as plotting programs, text editors, and markup languages such as LaTeX. It includes multiple homework assignments.
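
    As an example of the numerical techniques such a course covers, here is a bisection root finder; it is written in Python for illustration rather than in the FORTRAN the textbook teaches.

        def bisect(f, a, b, tol=1e-10):
            # Root of f on [a, b]; assumes f(a) and f(b) bracket a sign change.
            fa = f(a)
            if fa * f(b) > 0:
                raise ValueError("f(a) and f(b) must bracket a root")
            while b - a > tol:
                m = 0.5 * (a + b)
                fm = f(m)
                if fa * fm <= 0:
                    b = m
                else:
                    a, fa = m, fm
            return 0.5 * (a + b)

        # Example: the positive root of x**2 - 2, i.e. sqrt(2).
        print(bisect(lambda x: x * x - 2, 0.0, 2.0))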

  13. Experimental and computational analysis of a large protein network that controls fat storage reveals the design principles of a signaling network.

    Science.gov (United States)

    Al-Anzi, Bader; Arpp, Patrick; Gerges, Sherif; Ormerod, Christopher; Olsman, Noah; Zinn, Kai

    2015-05-01

    An approach combining genetic, proteomic, computational, and physiological analysis was used to define a protein network that regulates fat storage in budding yeast (Saccharomyces cerevisiae). A computational analysis of this network shows that it is not scale-free, and is best approximated by the Watts-Strogatz model, which generates "small-world" networks with high clustering and short path lengths. The network is also modular, containing energy level sensing proteins that connect to four output processes: autophagy, fatty acid synthesis, mRNA processing, and MAP kinase signaling. The importance of each protein to network function is dependent on its Katz centrality score, which is related both to the protein's position within a module and to the module's relationship to the network as a whole. The network is also divisible into subnetworks that span modular boundaries and regulate different aspects of fat metabolism. We used a combination of genetics and pharmacology to simultaneously block output from multiple network nodes. The phenotypic results of this blockage define patterns of communication among distant network nodes, and these patterns are consistent with the Watts-Strogatz model.
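
    A small sketch of the two network measurements the analysis relies on, Watts-Strogatz-style small-world statistics and Katz centrality, using networkx on a stand-in graph rather than the yeast protein network itself.

        import networkx as nx

        # Stand-in graph with small-world structure (not the yeast data).
        G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)

        print("clustering:  ", nx.average_clustering(G))
        print("path length: ", nx.average_shortest_path_length(G))

        # Katz centrality counts walks of all lengths, attenuated by alpha;
        # alpha must stay below 1/lambda_max of the adjacency matrix.
        katz = nx.katz_centrality(G, alpha=0.05)
        top = sorted(katz, key=katz.get, reverse=True)[:5]
        print("highest-Katz nodes:", top)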

  14. High speed switching for computer and communication networks

    NARCIS (Netherlands)

    Dorren, H.J.S.

    2014-01-01

    The role of data centers and computers is vital for the future of our data-centric society. Historically, the performance of data centers has increased by a factor of 100-1000 every ten years, and as a result the capacity of the data-center communication network has to scale accordingly. This

  15. A new approach in development of data flow control and investigation system for computer networks

    International Nuclear Information System (INIS)

    Frolov, I.; Vaguine, A.; Silin, A.

    1992-01-01

    This paper describes a new approach to the development of a data flow control and investigation system for computer networks. This approach was developed and applied at the Moscow Radiotechnical Institute for control and investigation of the Institute's computer network. It allowed us to solve our current network problems successfully. A description of our approach is presented below, along with the most interesting results of our work. (author)

  16. Improving Reliability in a Stochastic Communication Network

    Science.gov (United States)

    1990-12-01

    and GINO. In addition, the following computers were used: a Sun 386i workstation, a Digital Equipment Corporation (DEC) 11/785 miniframe, and a DEC...operating system. The DEC 11/785 miniframe used in the experiment was running Unix Version 4.3 (Berkeley System Domain). Maxflo was run on the DEC 11/785...the file was still called Modifyl.for). 4. The Maxflo program was started on the DEC 11/785 miniframe. 5. At this time the Convert.max file, created

  17. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    OpenAIRE

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on source tasks for generic purposes to the object tracking tasks using only a limited amount of tra...

  18. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)

    2007-07-20

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.

  19. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property
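
    The small-world behaviour of the original Watts-Strogatz model can be seen by sweeping the rewiring probability p and computing the characteristic path length L and clustering coefficient C; the degree-distribution extension developed in the paper is not reproduced in this sketch.

        import networkx as nx

        # Path length L and clustering C of Watts-Strogatz networks as the
        # rewiring probability p grows; the small-world regime shows low L
        # while C stays high.
        for p in [0.0, 0.01, 0.1, 1.0]:
            G = nx.connected_watts_strogatz_graph(n=500, k=10, p=p, seed=7)
            L = nx.average_shortest_path_length(G)
            C = nx.average_clustering(G)
            print(f"p={p:<5} L={L:7.3f} C={C:.3f}")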

  20. Finding Multi-step Attacks in Computer Networks using Heuristic Search and Mobile Ambients

    NARCIS (Netherlands)

    Nunes Leal Franqueira, V.

    2009-01-01

    An important aspect of IT security governance is the proactive and continuous identification of possible attacks in computer networks. This is complicated due to the complexity and size of networks, and due to the fact that usually network attacks are performed in several steps. This thesis proposes

  1. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  2. The Linux operating system: An introduction

    Science.gov (United States)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost-conscious educational institutions, Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  3. A Privacy-Preserving Framework for Collaborative Intrusion Detection Networks Through Fog Computing

    DEFF Research Database (Denmark)

    Wang, Yu; Xie, Lin; Li, Wenjuan

    2017-01-01

    Nowadays, cyber threats (e.g., intrusions) are distributed across various networks with the dispersed networking resources. Intrusion detection systems (IDSs) have already become an essential solution to defend against a large amount of attacks. With the development of cloud computing, a modern IDS...

  4. The Model of the Software Running on a Computer Equipment Hardware Included in the Grid network

    Directory of Open Access Journals (Sweden)

    T. A. Mityushkina

    2012-12-01

    Full Text Available A new approach to building a cloud computing environment using Grid networks is proposed in this paper. The authors describe the functional capabilities, the algorithm, and the model of the software running on computer hardware included in the Grid network, which will allow a cloud computing environment to be implemented using Grid technologies.

  5. Effective Response to Attacks On Department of Defense Computer Networks

    National Research Council Canada - National Science Library

    Shaha, Patrick

    2001-01-01

    .... For the Commanders-in-Chief (CINCs), computer networking has proven especially useful in maintaining contact and sharing data with elements forward deployed as well as with host nation governments and agencies...

  6. Computer network security and cyber ethics

    CERN Document Server

    Kizza, Joseph Migga

    2014-01-01

    In its 4th edition, this book remains focused on increasing public awareness of the nature and motives of cyber vandalism and cybercriminals, the weaknesses inherent in cyberspace infrastructure, and the means available to protect ourselves and our society. This new edition aims to integrate security education and awareness with discussions of morality and ethics. The reader will gain an understanding of how the security of information in general and of computer networks in particular, on which our national critical infrastructure and, indeed, our lives depend, is based squarely on the individ

  7. Computational Aspects of Sensor Network Protocols (Distributed Sensor Network Simulator

    Directory of Open Access Journals (Sweden)

    Vasanth Iyer

    2009-08-01

    Full Text Available In this work, we model sensor networks as an unsupervised learning and clustering process. We classify nodes according to their static distribution to form known class densities (CCPD). These densities are chosen from specific cross-layer features which maximize the lifetime of power-aware routing algorithms. To circumvent the computational complexity of a power-aware communication stack, we introduce path-loss models at the nodes only for high-density deployments. We study the cluster heads and formulate the data handling capacity for an expected deployment, and we use localized probability models to fuse the data with its side information before transmission. Each cluster head thus has a unique Pmax, but not all cluster heads have the same measured value. In a lossless mode, if there are no faults in the sensor network, then we can show that the highest probability given by Pmax is ambiguous if its frequency is ≤ n/2; otherwise it can be determined by a local function. We further show that event detection at the cluster heads can be modelled with a pattern 2m, where m, the number of bits, can be a correlated pattern of 2 bits, and for a tight lower bound we use 3-bit Huffman codes, which have entropy < 1. These local algorithms are further studied to optimize power, to detect faults, and to maximize the distributed routing algorithm used at the higher layers. From these bounds it is observed that, in a large network, the power dissipation is network-size invariant. The performance of the routing algorithms depends solely on the success of finding healthy nodes in a large distribution. It is also observed that if the network size is kept constant and the density of the nodes is kept closer, then the local path-loss model affects the performance of the routing algorithms. We also obtain the maximum intensity of transmitting nodes for a given category of routing algorithms under an outage constraint, i.e., the lifetime of the sensor network.

  8. Computer programme for control and maintenance and object oriented database: application to the realisation of an particle accelerator, the VIVITRON

    International Nuclear Information System (INIS)

    Diaz, A.

    1996-01-01

    The command and control system for the Vivitron, a new generation electrostatic particle accelerator, has been implemented using workstations and front-end computers based on VME standards, within a UNIX/VxWorks environment. This architecture is distributed over an Ethernet network. Measurements and commands for the different sensors and actuators are concentrated in the front-end computers. The development of a second version of the software, giving better performance and more functionality, is described. X11-based communication is used to transmit all the information necessary to display parameters from the front-end computers on the graphic screens. All other communications between processes use the Remote Procedure Call (RPC) method. The design of the system is based largely on the object-oriented database O2, which integrates a full description of the equipment and the code necessary to manage it. This code is generated by the database. This innovation permits easy maintenance of the system and bypasses the need for a specialist when adding new equipment. The new version of the command and control system has been progressively installed since August 1995. (author)

  9. Computer application for database management and networking of service radio physics

    International Nuclear Information System (INIS)

    Ferrando Sanchez, A.; Cabello Murillo, E.; Diaz Fuentes, R.; Castro Novais, J.; Clemente Gutierrez, F.; Casa de Juan, M. A. de la; Adaimi Hernandez, P.

    2011-01-01

    Databases in quality control prove to be a powerful tool for recording, management, and statistical process control. Developed in a Windows environment under Access (Microsoft Office), our service implements this philosophy on the center's computer network. A computer acting as the server provides the database to the treatment units for daily recording of quality control measurements and incidents. To avoid the common problems of shortcuts that stop working after data migration, possible use of duplicates, and erroneous data loss caused by errors in network connections, we proceeded to manage connections and database access centrally, easing maintenance and use for all service personnel.

  10. Computational cost for detecting inspiralling binaries using a network of laser interferometric detectors

    International Nuclear Information System (INIS)

    Pai, Archana; Bose, Sukanta; Dhurandhar, Sanjeev

    2002-01-01

    We extend a coherent network data-analysis strategy developed earlier for detecting Newtonian waveforms to the case of post-Newtonian (PN) waveforms. Since the PN waveform depends on the individual masses of the inspiralling binary, the parameter-space dimension increases by one from that of the Newtonian case. We obtain the number of templates and estimate the computational costs for PN waveforms: for a lower mass limit of 1 M⊙, for LIGO-I noise and with 3% maximum mismatch, the online computational speed requirement for a single detector is a few Gflops; for a two-detector network it is hundreds of Gflops and for a three-detector network it is tens of Tflops. Apart from idealistic networks, we obtain results for realistic networks comprising LIGO and VIRGO. Finally, we compare costs incurred in a coincidence detection strategy with those incurred in the coherent strategy detailed above

  11. Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.

    Science.gov (United States)

    Huson, Daniel H; Linz, Simone

    2018-01-01

    A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.

  12. Design of SPring-8 control system

    International Nuclear Information System (INIS)

    Wada, T.; Kumahara, T.; Yonehara, H.; Yoshikawa, H.; Masuda, T.; Wang Zhen

    1992-01-01

    The control system of the SPring-8 facility is designed. A distributed computer system with three hierarchical levels is adopted. All the computers are linked by computer networks. The network at the upper level is a high-speed multimedia LAN, such as FDDI, which links the sub-system control computers; the middle level uses Ethernet or MAP networks, which link front-end processors (FEP) such as VME systems; the lowest is a field-level bus which links the VME systems and controlled devices. Workstations (WS) or X-terminals are used for man-machine interfaces. For the operating system (OS), UNIX is used on the upper-level computers and real-time OSs on the FEPs. We will select hardware and OSs whose specifications are close to international standards. Since the cost of software has recently become higher than that of hardware, we introduce as many computer-aided tools as possible for program development. (author)

  13. NASF transposition network: A computing network for unscrambling p-ordered vectors

    Science.gov (United States)

    Lim, R. S.

    1979-01-01

    The design, programming, and application of the transposition network (TN) are presented. The TN is a programmable combinational logic network that connects 521 memory modules to 512 processors. The unscrambling of p-ordered vectors to 1-ordered vectors in one cycle is described. The TN design is based upon the concept of cyclic groups from abstract algebra and primitive roots and indices from number theory. The programming of the TN is very simple, requiring only 20 bits: 10 bits for offset control and 10 bits for barrel switch shift control. This simple control is executed by the control unit (CU), not the processors. Any memory access by a processor must be coordinated with the CU and wait for all other processors to come to a synchronization point. These wait and synchronization events can degrade the performance of a computation. The TN is intended for multidimensional data manipulation, matrix processing, and data sorting, and can also perform a perfect shuffle. Unlike other more complicated and powerful permutation networks, the TN cannot unscramble non-p-ordered vectors in one cycle, if it can unscramble them at all.
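
    The role of the prime module count can be shown with a few lines of arithmetic: because 521 is prime, addresses accessed with any stride p (not a multiple of 521) by the 512 processors land in 512 distinct memory modules, which is what lets a p-ordered vector be fetched conflict-free in one cycle. The offset and strides below are illustrative.

        M = 521      # number of memory modules (prime)
        P = 512      # number of processors
        offset = 3   # arbitrary starting address

        for p in [1, 2, 7, 64, 511]:                  # example strides
            banks = {(offset + k * p) % M for k in range(P)}
            assert len(banks) == P                    # no two processors share a bank
        print("all example strides are conflict-free")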

  14. Experimental and computational analysis of a large protein network that controls fat storage reveals the design principles of a signaling network.

    Directory of Open Access Journals (Sweden)

    Bader Al-Anzi

    2015-05-01

    Full Text Available An approach combining genetic, proteomic, computational, and physiological analysis was used to define a protein network that regulates fat storage in budding yeast (Saccharomyces cerevisiae). A computational analysis of this network shows that it is not scale-free, and is best approximated by the Watts-Strogatz model, which generates "small-world" networks with high clustering and short path lengths. The network is also modular, containing energy level sensing proteins that connect to four output processes: autophagy, fatty acid synthesis, mRNA processing, and MAP kinase signaling. The importance of each protein to network function is dependent on its Katz centrality score, which is related both to the protein's position within a module and to the module's relationship to the network as a whole. The network is also divisible into subnetworks that span modular boundaries and regulate different aspects of fat metabolism. We used a combination of genetics and pharmacology to simultaneously block output from multiple network nodes. The phenotypic results of this blockage define patterns of communication among distant network nodes, and these patterns are consistent with the Watts-Strogatz model.

  15. A UNIX-like microkernel operating system for an FPGA processor

    OpenAIRE

    Liepkalns, Ansis

    2012-01-01

    In solutions that use field-programmable gate array (FPGA) processors, the availability of software is important for reducing the time needed to obtain the end product. UNIX-like operating systems have gained broad software support. Their combination with FPGA processors can provide the desired development speed. To satisfy quality requirements, the use of a microkernel operating system is proposed. The thesis examines the construction of a system-on-chip for operation with the „Minix 3“ microkernel...

  16. A real-time data-acquisition and analysis system with distributed UNIX workstations

    International Nuclear Information System (INIS)

    Yamashita, H.; Miyamoto, K.; Maruyama, K.; Hirosawa, H.; Nakayoshi, K.; Emura, T.; Sumi, Y.

    1996-01-01

    A compact data-acquisition system using three RISC/UNIX™ workstations (SUN™/SPARCstation™) with real-time capabilities of monitoring and analysis has been developed for the study of photonuclear reactions with the large-acceptance spectrometer TAGX. One workstation acquires data from memory modules in the front-end electronics (CAMAC and TKO) with a maximum speed of 300 Kbytes/s, where data size times instantaneous rate is 1 Kbyte x 300 Hz. Another workstation, which has real-time capability for run monitoring, gets the data with a buffer manager called NOVA. The third workstation analyzes the data and reconstructs the event. In addition to a general hardware and software description, priority settings and run control by shell scripts are described. This system has recently been used successfully in a two month long experiment. (orig.)

  17. NEPTUNIX 2: Operating on computers network - Catalogued procedures

    International Nuclear Information System (INIS)

    Roux, Pierre.

    1982-06-01

    NEPTUNIX 2 is a package which carries out the simulation of complex processes described by numerous non-linear algebro-differential equations. Its main features are: non-linear or time-dependent parameters, implicit form, stiff systems, and dynamic changes of equations leading to discontinuities in some variables. The mathematical model is thus built as an equation set F(x,x',l,t) = 0, where t is the independent variable, x' the derivative of x and l an ''algebrized'' logical variable. The NEPTUNIX 2 package is divided into two successive major steps: a non-numerical step and a numerical step. The numerical step, using results from a picture of the model translated into FORTRAN in a form fitted to the executing computer, carries out the simulations; in this way, the NEPTUNIX 2 numerical step is portable. In contrast, the non-numerical step must be executed on an IBM series 370 computer or on a compatible computer. The present manual describes NEPTUNIX 2 operating procedures when the two steps are executed on the same computer, and also when the numerical step is executed on another computer, connected or not to the same computing network [fr
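
    To make the implicit form concrete, here is a minimal sketch of one way to advance a model given as a residual F(x, x', t) = 0, using an implicit-Euler step solved with a Newton-type solver. This illustrates the problem class only, not NEPTUNIX's actual integration algorithm, and the toy stiff equation is invented for the example.

        import numpy as np
        from scipy.optimize import fsolve

        def F(x, xdot, t):
            # Toy residual in implicit form F(x, x', t) = 0; here it encodes
            # the stiff equation x' = -50*(x - cos(t)).
            return xdot + 50.0 * (x - np.cos(t))

        def step(x_old, t_new, h):
            # One implicit-Euler step: solve F(x_new, (x_new - x_old)/h, t_new) = 0.
            return fsolve(lambda x_new: F(x_new, (x_new - x_old) / h, t_new), x_old)[0]

        x, t, h = 1.0, 0.0, 0.05
        for _ in range(10):
            t += h
            x = step(x, t, h)
        print(f"x({t:.2f}) = {x:.5f}, cos(t) = {np.cos(t):.5f}")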

  18. A Human/Computer Learning Network to Improve Biodiversity Conservation and Research

    OpenAIRE

    Kelling, Steve; Gerbracht, Jeff; Fink, Daniel; Lagoze, Carl; Wong, Weng-Keen; Yu, Jun; Damoulas, Theodoros; Gomes, Carla

    2012-01-01

    In this paper we describe eBird, a citizen-science project that takes advantage of the human observational capacity to identify birds to species, which is then used to accurately represent patterns of bird occurrences across broad spatial and temporal extents. eBird employs artificial intelligence techniques such as machine learning to improve data quality by taking advantage of the synergies between human computation and mechanical computation. We call this a Human-Computer Learning Network,...

  19. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation.

    Science.gov (United States)

    Hu, Hongzhi; Mao, Huajuan; Hu, Xiaohua; Hu, Feng; Sun, Xuemin; Jing, Zaiping; Duan, Yunsuo

    2015-01-01

    Due to their extensive social influence, public health emergencies have attracted great attention in today's society. The booming social network is becoming a main information dissemination platform for those events and has caused high concern in emergency management, in which a good prediction of information dissemination in social networks is necessary for estimating an event's social impacts and making a proper strategy. However, information dissemination is largely affected by complex interactive activities and group behaviors in social networks; existing methods and models are limited in achieving a satisfactory prediction result due to the open, changeable social connections and uncertain information-processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. In order to obtain better information dissemination prediction in social networks, this paper proposes an intelligent computation method under the framework of TDF (Theory-Data-Feedback) based on an ACP simulation system which was successfully applied to the analysis of the A (H1N1) flu emergency.

  20. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adaptation of artificial neural networks to robust fault diagnosis schemes. It presents neural-network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. Part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, and the importance of robustness. The book is of tutorial value and can be perceived as a good starting point for newcomers to this field. The book is also devoted to advanced schemes for describing neural model uncertainty. In particular, methods of computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach to system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  1. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  2. SCinet Architecture: Featured at the International Conference for High Performance Computing, Networking, Storage and Analysis 2016

    Energy Technology Data Exchange (ETDEWEB)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    2017-02-06

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Super Computing or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  3. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models

    DEFF Research Database (Denmark)

    Mazzoni, Alberto; Linden, Henrik; Cuntz, Hermann

    2015-01-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP) ... in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.

  4. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function (T_P : I → I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
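
    A minimal sketch of the single-step operator itself for a propositional definite program, iterated to its least fixed point; the mapping of this operator onto radial basis function networks, which is the subject of the paper, is not reproduced here.

        def tp(program, interpretation):
            # Single-step operator T_P: heads of clauses whose bodies hold.
            return {head for head, body in program if body <= interpretation}

        # Clauses as (head, body) pairs; bodies are sets of atoms.
        program = [
            ("a", set()),          # fact:  a.
            ("b", {"a"}),          # rule:  b :- a.
            ("c", {"a", "b"}),     # rule:  c :- a, b.
        ]

        # Iterate from the empty interpretation up to the least fixed point.
        I = set()
        while (nxt := tp(program, I)) != I:
            I = nxt
        print("least fixed point:", sorted(I))   # -> ['a', 'b', 'c']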

  5. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function (T_P : I → I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  6. Computational cost for detecting inspiralling binaries using a network of laser interferometric detectors

    CERN Document Server

    Pai, A; Dhurandhar, S V

    2002-01-01

    We extend a coherent network data-analysis strategy developed earlier for detecting Newtonian waveforms to the case of post-Newtonian (PN) waveforms. Since the PN waveform depends on the individual masses of the inspiralling binary, the parameter-space dimension increases by one from that of the Newtonian case. We obtain the number of templates and estimate the computational costs for PN waveforms: for a lower mass limit of 1 M⊙, for LIGO-I noise and with 3% maximum mismatch, the online computational speed requirement for a single detector is a few Gflops; for a two-detector network it is hundreds of Gflops and for a three-detector network it is tens of Tflops. Apart from idealistic networks, we obtain results for realistic networks comprising LIGO and VIRGO. Finally, we compare costs incurred in a coincidence detection strategy with those incurred in the coherent strategy detailed above.

  7. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function (T_P : I → I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks

  8. Computational intelligence in wireless sensor networks recent advances and future challenges

    CERN Document Server

    Falcon, Rafael; Koeppen, Mario

    2017-01-01

    This book emphasizes the increasingly important role that Computational Intelligence (CI) methods are playing in solving a myriad of entangled Wireless Sensor Networks (WSN) related problems. The book serves as a guide for surveying several state-of-the-art WSN scenarios in which CI approaches have been employed. The reader finds in this book how CI has contributed to solving a wide range of challenging problems, ranging from balancing the cost and accuracy of heterogeneous sensor deployments to recovering from real-time sensor failures to detecting attacks launched by malicious sensor nodes and enacting CI-based security schemes. Network managers, industry experts, academicians and practitioners alike (mostly in computer engineering, computer science or applied mathematics) benefit from the spectrum of successful applications reported in this book. Senior undergraduate or graduate students may discover in this book some problems well suited for their own research endeavors. USP: Presents recent advances and fu...

  9. Effects Of Social Networking Sites (SNSs) On Hyper Media Computer Mediated Environments (HCMEs)

    OpenAIRE

    Yoon C. Cho

    2011-01-01

    Social Networking Sites (SNSs) are known as tools to interact and build relationships between users/customers in Hyper Media Computer Mediated Environments (HCMEs). This study explored how social networking sites play a significant role in communication between users. While numerous researchers have examined the effectiveness of social networking websites, few studies have investigated which factors affect customers' attitudes and behavior toward social networking sites. In this paper, the authors inv...

  10. A comparative analysis on computational methods for fitting an ERGM to biological network data

    Directory of Open Access Journals (Sweden)

    Sudipta Saha

    2015-03-01

    Full Text Available Exponential random graph models (ERGMs), based on graph theory, are useful in studying global biological network structure using its local properties. However, computational methods for fitting such models are sensitive to the type, structure, and number of local features of the network under study. In this paper, we compared computational methods for fitting an ERGM with local features of different types and structures. Two commonly used methods, Markov Chain Monte Carlo Maximum Likelihood Estimation and Maximum Pseudo-Likelihood Estimation, are considered for estimating the coefficients of network attributes. We compared the estimates of the observed network to our randomly simulated network using both methods under the ERGM. The motivation was to ascertain the extent to which an observed network would deviate from a randomly simulated network if the physical numbers of attributes were approximately the same. Cut-off points for some common attributes of interest for different orders of nodes were determined through simulations. We implemented our method on a known regulatory network database of Escherichia coli (E. coli).
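
    Maximum pseudo-likelihood estimation for an ERGM reduces to a logistic regression of each dyad's tie indicator on its change statistics. A sketch with edge and triangle terms on a stand-in graph (not the E. coli data) follows; the near-zero regularization mimics a plain maximum-likelihood logistic fit.

        import numpy as np
        import networkx as nx
        from itertools import combinations
        from sklearn.linear_model import LogisticRegression

        # Toy "observed" network standing in for a biological network.
        G = nx.watts_strogatz_graph(n=30, k=4, p=0.2, seed=3)

        # One row per dyad: change statistics for edge and triangle terms.
        X, y = [], []
        for i, j in combinations(G.nodes, 2):
            d_edge = 1.0                           # adding (i,j) adds one edge
            d_tri = len(set(G[i]) & set(G[j]))     # ...and this many triangles
            X.append([d_edge, d_tri])
            y.append(1 if G.has_edge(i, j) else 0)

        # Large C approximates an unpenalized logistic fit (plain MPLE).
        fit = LogisticRegression(C=1e6, fit_intercept=False).fit(np.array(X), y)
        print("edge, triangle coefficients:", fit.coef_[0])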

  11. INVITATION TO PERFORM Y2K TESTING UNDER UNIX

    CERN Multimedia

    CERN Y2K Co-ordinator

    1999-01-01

    Introduction: A special AFS cell 'y2k.cern.ch' has been established to allow service managers and users to test y2k compliance. In addition to AFS, the cluster consists of machines representing all the Unix flavours in use at CERN (AIX, DUNIX, HP-UX, IRIX, LINUX, and SOLARIS). More information can be obtained from the page: http://wwwinfo.cern.ch/pdp/bis/y2k/y2kplus.html Testing schedule: The cluster will be set to 25 December 1999 on fixed days and then left running for three weeks. This gives people one week to prepare test programs in 1999 and two weeks to check the consequences of passing into year 2000. These fixed dates are set as follows: 19 May 1999, date set to 25/12/99 (year 2000 starts on 26 May); 9 June 1999, date set to 25/12/99 (year 2000 starts on 16 June); 30 June 1999, date set to 25/12/99 (year 2000 starts on 7 July). If more than these three sessions are needed an announcement will be made later. Registration: The following Web page should be used for r...

  12. Wearable computing from modeling to implementation of wearable systems based on body sensor networks

    CERN Document Server

    Fortino, Giancarlo; Galzarano, Stefano

    2018-01-01

    This book provides the most up-to-date research and development on wearable computing, wireless body sensor networks, wearable systems integrated with mobile computing, wireless networking and cloud computing. This book has a specific focus on advanced methods for programming Body Sensor Networks (BSNs) based on the reference SPINE project. It features an on-line website (http://spine.deis.unical.it) to support readers in developing their own BSN application/systems and covers new emerging topics on BSNs such as collaborative BSNs, BSN design methods, autonomic BSNs, integration of BSNs and pervasive environments, and integration of BSNs with cloud computing. The book provides a description of real BSN prototypes with the possibility to see on-line demos and download the software to test them on specific sensor platforms and includes case studies for more practical applications. * Provides a future roadmap by learning advanced technology and open research issues * Gathers the background knowledge to tackl...

  13. Compiling gate networks on an Ising quantum computer

    International Nuclear Information System (INIS)

    Bowdrey, M.D.; Jones, J.A.; Knill, E.; Laflamme, R.

    2005-01-01

    Here we describe a simple mechanical procedure for compiling a quantum gate network into the natural gates (pulses and delays) for an Ising quantum computer. The aim is not necessarily to generate the most efficient pulse sequence, but rather to develop an efficient compilation algorithm that can be easily implemented in large spin systems. The key observation is that it is not always necessary to refocus all the undesired couplings in a spin system. Instead, the coupling evolution can simply be tracked and then corrected at some later time. Although described within the language of NMR, the algorithm is applicable to any design of quantum computer based on Ising couplings

  14. DEBROS: design and use of a Linux-like RTOS on an inexpensive 8-bit single board computer

    International Nuclear Information System (INIS)

    Davis, M.A.

    2012-01-01

    As the power, complexity, and capabilities of embedded processors continue to grow, it is easy to forget just how much can be done with inexpensive Single Board Computers (SBCs) based on 8-bit processors. When the proprietary, non-standard tools from the vendor for one such embedded computer became a major roadblock, I embarked on a project to expand my own knowledge and provide a more flexible, standards based alternative. Inspired by the early work done on operating systems such as UNIX™, Linux, and Minix, I wrote DEBROS (the Davis Embedded Baby Real-time Operating System), which is a fully preemptive, priority-based OS with soft real-time capabilities that provides a subset of standard Linux/UNIX compatible system calls such as stdio, BSD sockets, pipes, semaphores, etc. The end result was a much more flexible, standards-based development environment which allowed me to simplify my programming model, expand diagnostic capabilities, and reduce the time spent monitoring and applying updates to the hundreds of devices in the lab currently using such inexpensive hardware. (author)

  15. Brookhaven Reactor Experiment Control Facility, a distributed function computer network

    International Nuclear Information System (INIS)

    Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

    1975-11-01

    A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and has been set into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented

  16. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models

    Science.gov (United States)

    Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T.

    2015-01-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo. PMID:26657024

  17. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    Directory of Open Access Journals (Sweden)

    Alberto Mazzoni

    2015-12-01

    Full Text Available Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
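
    A minimal sketch of such a fixed linear combination is given below in Python/NumPy. The coefficient, sign convention, and which current is delayed are illustrative placeholders, not the paper's fitted values; the best-fit proxy must be taken from the paper or re-fitted against ground-truth simulations.

      import numpy as np

      def lfp_proxy(ampa, gaba, dt_ms, alpha=1.65, delay_ms=6.0):
          # ampa, gaba: (n_neurons, n_timesteps) synaptic current traces
          # from a point-neuron LIF simulation. alpha and delay_ms are
          # assumed example values, not the published coefficients.
          a = np.abs(ampa.sum(axis=0))          # population-summed AMPA current
          g = np.abs(gaba.sum(axis=0))          # population-summed GABA current
          shift = int(round(delay_ms / dt_ms))  # delay one component in time
          g_delayed = np.zeros_like(g)
          g_delayed[shift:] = g[:g.size - shift]
          return a - alpha * g_delayed          # fixed linear combination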

  18. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    Science.gov (United States)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  19. Computerized management of radiology department: Installation and use of local area network(LAN) by personal computers

    International Nuclear Information System (INIS)

    Lee, Young Joon; Han, Kook Sang; Geon, Do Ig; Sol, Chang Hyo; Kim, Byung Soo

    1993-01-01

    There is an increasing need for networks connecting personal computers (PCs). Thus the local area network (LAN) emerged, designed to allow multiple computers to access and share files, programs and expensive peripheral devices, and to communicate with each user. We built a PC-LAN in our department that consisted of 1) hardware - nine personal computers (IBM-compatible 80386 DX, 1 set; 80286 AT, 8 sets) and the cables and network interface cards (Ethernet-compatible, 16 bits) that connected the PCs and peripheral devices, and 2) software - a network operating system and a database management system. We managed this network for 6 months. The benefits of the PC-LAN were 1) multiuser operation (sharing of files, programs and peripheral devices), 2) real-time data processing, 3) excellent expandability, flexibility, compatibility and easy connectivity, 4) a single cable for networking, 5) rapid data transmission, 6) simple and easy installation and management, 7) use of conventional PC software running under DOS (Disk Operating System) without modification, and 8) low networking cost. In conclusion, a PC-LAN provides an easier and more effective way to manage the multiuser database system needed at hospital departments than the more expensive and complex networks of minicomputers or mainframes

  20. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.
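
    The Gödelization step can be illustrated on a one-sided shift: a symbol sequence is encoded as a b-adic expansion of a point in the unit interval, and the symbolic shift becomes multiplication modulo one. A minimal Python sketch of this idea only (the paper's versatile shift and its two-sided nonlinear dynamical automata are considerably richer):

      def godel_encode(seq, b):
          # Map a symbol sequence over alphabet {0, ..., b-1} to a point in [0, 1).
          return sum(s * b ** -(i + 1) for i, s in enumerate(seq))

      def shift(x, b):
          # One step of the (left-)shift dynamics acting on the encoded point.
          return (b * x) % 1.0

      # The dynamics on the encoding mirrors the symbolic shift:
      seq = [1, 0, 2, 1]
      x = godel_encode(seq, b=3)
      assert abs(shift(x, 3) - godel_encode(seq[1:], 3)) < 1e-12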

  1. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    Directory of Open Access Journals (Sweden)

    Qian Li

    Full Text Available BACKGROUND: Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs to treat complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weights of the different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. METHODOLOGY: We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease of the network efficiency suggests that the approach could be a promising computational systems biology tool to aid the identification of anticoagulant activities of compounds in drug discovery. CONCLUSIONS: This article proposes a network-based multi-target computational estimation

  2. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    Science.gov (United States)

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs to treat complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weights of the different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease of the network efficiency suggests that the approach could be a promising computational systems biology tool to aid the identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by
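
    A small sketch of the fragility analysis in Python with NetworkX, using a toy stand-in graph rather than the paper's full clotting-cascade model; the node names and edges below are illustrative only:

      import networkx as nx

      # Toy stand-in for a coagulation-cascade interaction graph.
      G = nx.Graph()
      G.add_edges_from([
          ("XIIa", "XIa"), ("XIa", "IXa"), ("IXa", "Xa"),
          ("VIIIa:IXa", "Xa"), ("Xa", "IIa"), ("IIa", "fibrin"),
      ])

      baseline = nx.global_efficiency(G)

      # Rank nodes by the drop in global efficiency when each is removed,
      # a simple proxy for the "fragility" analysis described above.
      fragility = {}
      for node in list(G.nodes):
          H = G.copy()
          H.remove_node(node)
          fragility[node] = baseline - nx.global_efficiency(H)

      for node, drop in sorted(fragility.items(), key=lambda kv: -kv[1]):
          print(f"{node:10s} efficiency drop: {drop:.3f}")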

  3. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
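
    For orientation, a Gibbs-sampling baseline over binary "spike" variables of a Boltzmann distribution is sketched below in Python; the paper's actual contribution replaces such reversible Gibbs updates with a non-reversible chain whose auxiliary variables capture refractory spiking dynamics.

      import numpy as np

      rng = np.random.default_rng(0)

      def sigma(u):
          return 1.0 / (1.0 + np.exp(-u))

      def gibbs_sweep(z, W, b):
          # One sweep over all units, targeting p(z) ~ exp(b.z + z.W.z / 2)
          # for symmetric W with zero diagonal.
          for k in range(z.size):
              u_k = b[k] + W[k] @ z              # local "membrane potential"
              z[k] = 1.0 if rng.random() < sigma(u_k) else 0.0
          return z

      n = 5                                      # toy network of five units
      W = rng.normal(0.0, 0.5, (n, n))
      W = (W + W.T) / 2.0
      np.fill_diagonal(W, 0.0)
      b = rng.normal(0.0, 0.5, n)

      z = np.zeros(n)
      samples = np.array([gibbs_sweep(z, W, b).copy() for _ in range(5000)])
      print("marginal firing probabilities:", samples.mean(axis=0))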

  4. Trusted Network Interpretation of the Trusted Computer System Evaluation Criteria. Version 1.

    Science.gov (United States)

    1987-07-01

    Only fragments of the report text were captured for this record, chiefly references: Secure Computer Systems, MTR-3153, The MITRE Corporation, Bedford, MA, June 1975; and Abrams, M. D. and H. J. Podell, Tutorial: Computer and Network Security, IEEE Computer Society Press, 1987.

  5. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    Science.gov (United States)

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administration in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio-taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, while viewing errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  6. Fuzzy logic, neural networks, and soft computing

    Science.gov (United States)

    Zadeh, Lotfi A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial

  7. Computerization of administration and operation management in Polish black coal mines

    Energy Technology Data Exchange (ETDEWEB)

    Mastej, R.; Syrkiewicz, J. (Centralny Osrodek Informatyki Gornictwa (Poland))

    1990-08-01

    Characterizes main solutions of the computerized management model adopted in Poland for the mining industry and the technical and organizational structure of computer system application. Computer systems for black coal mines and the range of microprocessor application are shown in block diagrams. The COIG mining information center makes about 45 computer system modules with independent implementation available for black coal mines. The general concept foresees central data processing in the COIG center on the ODRA 1305 and ICL 297 computers with the G-3 operating system in the first stage and an ICL series 39 computer with the VME operating system in the second stage. For mines where no transmission lines are available local solutions based on smaller ICL computers, minicomputers or computer networks with the NOVELL network operating system or multi-access systems with the UNIX operating system are planned. 3 refs.

  8. Container-code recognition system based on computer vision and deep neural networks

    Science.gov (United States)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    An automatic container-code recognition system has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules, a detection module and a recognition module. The detection module applies both algorithms based on computer vision and neural networks, and generates a better detection result through their combination to avoid the drawbacks of the two methods. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result which passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism improves the performance of the neural networks at runtime. The proposed system achieves 93% overall recognition accuracy.

  9. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation

    Directory of Open Access Journals (Sweden)

    Hongzhi Hu

    2015-01-01

    Full Text Available Due to its extensive social influence, public health emergency has attracted great attention in today’s society. The booming social network is becoming a main information dissemination platform for those events and has caused high concern in emergency management, where a good prediction of information dissemination in social networks is necessary for estimating an event’s social impacts and devising a proper strategy. However, information dissemination is largely affected by complex interactive activities and group behaviors in social networks; the existing methods and models are limited in achieving a satisfactory prediction result due to the open, changeable social connections and uncertain information processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. In order to obtain better information dissemination prediction in social networks, this paper proposes an intelligent computation method under the framework of TDF (Theory-Data-Feedback) based on an ACP simulation system, which was successfully applied to the analysis of the A (H1N1) Flu emergency.

  10. Systems approach to modeling the Token Bucket algorithm in computer networks

    Directory of Open Access Journals (Sweden)

    Ahmed N. U.

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. Also we propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
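
    A minimal Python sketch of the TB policing function itself (parameter values are illustrative; the paper's contribution is the dynamic systems model built around it):

      import time

      class TokenBucket:
          # Minimal token-bucket policer: tokens accrue at `rate` per second
          # up to `capacity`; a packet of `size` tokens conforms only if
          # enough tokens are available at arrival.

          def __init__(self, rate, capacity):
              self.rate = float(rate)
              self.capacity = float(capacity)
              self.tokens = float(capacity)
              self.last = time.monotonic()

          def conforms(self, size):
              now = time.monotonic()
              self.tokens = min(self.capacity,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if size <= self.tokens:
                  self.tokens -= size
                  return True          # packet admitted
              return False             # packet policed (dropped or marked)

      tb = TokenBucket(rate=1_000_000, capacity=64_000)   # 1 Mb/s, 64 kb burst
      print(tb.conforms(1500 * 8))                        # a 1500-byte packet, in bits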

  11. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    Science.gov (United States)

    1987-10-01

    An instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen

  12. A three-dimensional computational model of collagen network mechanics.

    Directory of Open Access Journals (Sweden)

    Byoungkoo Lee

    Full Text Available Extracellular matrix (ECM) strongly influences cellular behaviors, including cell proliferation, adhesion, and particularly migration. In cancer, the rigidity of the stromal collagen environment is thought to control tumor aggressiveness, and collagen alignment has been linked to tumor cell invasion. While the mechanical properties of collagen at both the single fiber scale and the bulk gel scale are quite well studied, how the fiber network responds to local stress or deformation, both structurally and mechanically, is poorly understood. This intermediate scale knowledge is important to understanding cell-ECM interactions and is the focus of this study. We have developed a three-dimensional elastic collagen fiber network model (bead-and-spring model) and studied fiber network behaviors for various biophysical conditions: collagen density, crosslinker strength, crosslinker density, and fiber orientation (random vs. prealigned). We found the best-fit crosslinker parameter values using shear simulation tests in a small strain region. Using this calibrated collagen model, we simulated both shear and tensile tests in a large linear strain region for different network geometry conditions. The results suggest that network geometry is a key determinant of the mechanical properties of the fiber network. We further demonstrated how the fiber network structure and mechanics evolve with a local deformation, mimicking the effect of pulling by a pseudopod during cell migration. Our computational fiber network model is a step toward a full biomechanical model of cellular behaviors in various ECM conditions.
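
    The axial-stretch part of a bead-and-spring fiber model reduces to Hookean forces along each fiber segment, sketched below in Python/NumPy; a full model such as the one described above also needs bending and crosslinker terms, which are omitted here.

      import numpy as np

      def spring_forces(positions, springs, k, rest_length):
          # Hookean forces on beads in a 3D bead-and-spring fiber network.
          # positions : (N, 3) bead coordinates
          # springs   : list of (i, j) index pairs for connected beads
          forces = np.zeros_like(positions)
          for i, j in springs:
              d = positions[j] - positions[i]
              length = np.linalg.norm(d)
              f = k * (length - rest_length) * d / length  # along the fiber axis
              forces[i] += f
              forces[j] -= f
          return forces

      # Two beads stretched beyond rest length pull toward each other
      pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
      print(spring_forces(pos, [(0, 1)], k=1.0, rest_length=1.0))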

  13. INTERRUPTION TO COMPUTING SERVICES, SATURDAY 9 FEBRUARY

    CERN Multimedia

    2002-01-01

    In order to allow the rerouting of electrical cables which power most of the B513 Computer Room, there will be a complete shutdown of central computing services on Saturday 9th February. This shutdown affects all Central Computing services, including all NICE services (for Windows 95, Windows NT and Windows 2000), Mail and Web services, sitewide printing services, all Unix interactive and batch services, the ACB service, all AIS services and databases (such as EDH, BHT, CFU and HR), dedicated Engineering services, and general purpose database services. Services will be run down progressively from early on Saturday morning and reestablished as soon as possible, starting in the afternoon. However, it is unlikely that full computing services will be available before Saturday evening. For operational reasons, some services may be shut down on the evening of Friday 8th February and restarted on Monday 11th February. More detailed information about the stoppage and restart schedules will be given nearer...

  14. INTERRUPTION TO COMPUTING SERVICES, SATURDAY 9 FEBRUARY

    CERN Multimedia

    2002-01-01

    In order to allow the rerouting of electrical cables which power most of the B513 Computer Room, there will be a complete shutdown of central computing services on Saturday 9th February. This shutdown affects all Central Computing services, including all NICE services (for Windows 95, Windows NT and Windows 2000), Mail and Web services, sitewide printing services, all Unix interactive and batch services, the ACB service, all AIS services and databases (such as EDH, BHT, CFU and HR), dedicated Engineering services, and general purpose database services. Services will be run down progressively from early on Saturday morning and reestablished as soon as possible, starting in the afternoon. However, it is unlikely that full computing services will be available before Saturday evening. For operational reasons, some services may be shut down on the evening of Friday 8th February and restarted on Monday 11th February. More detailed information about the stoppage and restart schedules will be given nearer ...

  15. Cloud and fog computing in 5G mobile networks emerging advances and applications

    CERN Document Server

    Markakis, Evangelos; Mavromoustakis, Constandinos X; Pallis, Evangelos

    2017-01-01

    This book focuses on the challenges and solutions related to cloud and fog computing for 5G mobile networks, and presents novel approaches to the frameworks and schemes that carry out storage, communication, computation and control in the fog/cloud paradigm.

  16. Computing with networks of spiking neurons on a biophysically motivated floating-gate based neuromorphic integrated circuit.

    Science.gov (United States)

    Brink, S; Nease, S; Hasler, P

    2013-09-01

    Results are presented from several spiking network experiments performed on a novel neuromorphic integrated circuit. The networks are discussed in terms of their computational significance, which includes applications such as arbitrary spatiotemporal pattern generation and recognition, winner-take-all competition, stable generation of rhythmic outputs, and volatile memory. Analogies to the behavior of real biological neural systems are also noted. The alternatives for implementing the same computations are discussed and compared from a computational efficiency standpoint, with the conclusion that implementing neural networks on neuromorphic hardware is significantly more power efficient than numerical integration of model equations on traditional digital hardware. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Social sciences via network analysis and computation

    CERN Document Server

    Kanduc, Tadej

    2015-01-01

    In recent years information and communication technologies have gained significant importance in the social sciences. Because there is such rapid growth of knowledge, methods and computer infrastructure, research can now seamlessly connect interdisciplinary fields such as business process management, data processing and mathematics. This study presents some of the latest results, practices and state-of-the-art approaches in network analysis, machine learning, data mining, data clustering and classification in the context of the social sciences. It also covers various real-life examples such as t

  18. Physics Detector Simulation Facility (PDSF) architecture/utilization

    International Nuclear Information System (INIS)

    Scipioni, B.

    1993-05-01

    The current systems architecture for the SSCL's Physics Detector Simulation Facility (PDSF) is presented. Systems analysis data is presented and discussed. In particular, these data disclose the effectiveness of utilization of the facility for meeting the needs of physics computing, especially as concerns parallel architecture and processing. Detailed design plans for the highly networked, symmetric, parallel, UNIX workstation-based facility are given and discussed in light of the design philosophy. Included are network, CPU, disk, router, concentrator, tape, user and job capacities and throughput

  19. Informatic parcellation of the network involved in the computation of subjective value

    Science.gov (United States)

    Rangel, Antonio

    2014-01-01

    Understanding how the brain computes value is a basic question in neuroscience. Although individual studies have driven progress, meta-analyses provide an opportunity to test hypotheses that require large collections of data. We carry out a meta-analysis of a large set of functional magnetic resonance imaging studies of value computation to address several key questions. First, what is the full set of brain areas that reliably correlate with stimulus values when they need to be computed? Second, is this set of areas organized into dissociable functional networks? Third, is a distinct network of regions involved in the computation of stimulus values at decision and outcome? Finally, are different brain areas involved in the computation of stimulus values for different reward modalities? Our results demonstrate the centrality of ventromedial prefrontal cortex (VMPFC), ventral striatum and posterior cingulate cortex (PCC) in the computation of value across tasks, reward modalities and stages of the decision-making process. We also find evidence of distinct subnetworks of co-activation within VMPFC, one involving central VMPFC and dorsal PCC and another involving more anterior VMPFC, left angular gyrus and ventral PCC. Finally, we identify a posterior-to-anterior gradient of value representations corresponding to concrete-to-abstract rewards. PMID:23887811

  20. Computer-Aided Analysis of Flow in Water Pipe Networks after a Seismic Event

    Directory of Open Access Journals (Sweden)

    Won-Hee Kang

    2017-01-01

    Full Text Available This paper proposes a framework for a reliability-based flow analysis for a water pipe network after an earthquake. For the first part of the framework, we propose to use a modeling procedure for multiple leaks and breaks in the water pipe segments of a network that has been damaged by an earthquake. For the second part, we propose an efficient system-level probabilistic flow analysis process that integrates the matrix-based system reliability (MSR) formulation and the branch-and-bound method. This process probabilistically predicts flow quantities by considering system-level damage scenarios consisting of combinations of leaks and breaks in network pipes and significantly reduces the computational cost by sequentially prioritizing the system states according to their likelihoods and by using the branch-and-bound method to select their partial sets. The proposed framework is illustrated and demonstrated by examining two example water pipe networks that have been subjected to a seismic event. These two examples consist of 11 and 20 pipe segments, respectively, and are computationally modeled considering their available topological, material, and mechanical properties. Considering different earthquake scenarios and the resulting multiple leaks and breaks in the water pipe segments, the water flows in the segments are estimated in a computationally efficient manner.
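
    The prioritization idea, generating damage states in decreasing likelihood and stopping once a target probability mass is covered, can be sketched with a best-first search in Python. Independent pipe damage probabilities are assumed here for illustration; the branch-and-bound details in the paper differ.

      import heapq

      def most_likely_states(p_damage, coverage=0.99):
          # Enumerate pipe damage states (tuples of 0/1 flags) in decreasing
          # probability until the listed states cover `coverage` of the total
          # probability mass. Heap entries: (-probability, depth, partial state).
          n = len(p_damage)
          heap = [(-1.0, 0, ())]
          covered, out = 0.0, []
          while heap and covered < coverage:
              neg_p, i, state = heapq.heappop(heap)
              if i == n:                         # complete state: emit it
                  out.append((state, -neg_p))
                  covered += -neg_p
                  continue
              for flag, q in ((0, 1 - p_damage[i]), (1, p_damage[i])):
                  heapq.heappush(heap, (neg_p * q, i + 1, state + (flag,)))
          return out

      states = most_likely_states([0.05, 0.20, 0.10])
      for s, p in states[:4]:
          print(s, round(p, 4))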

  1. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  2. Local and Long Distance Computer Networking for Science Classrooms. Technical Report No. 43.

    Science.gov (United States)

    Newman, Denis

    This report describes Earth Lab, a project which is demonstrating new ways of using computers for upper-elementary and middle-school science instruction, and finding ways to integrate local-area and telecommunications networks. The discussion covers software, classroom activities, formative research on communications networks, and integration of…

  3. How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing

    Science.gov (United States)

    Decyk, V. K.; Dauger, D. E.

    We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
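
    A minimal message-passing example in the same spirit, written in Python with mpi4py rather than the authors' Fortran77/C MPI subset (the ring exchange and the file name below are illustrative, not code from the project):

      # Run with: mpiexec -n 4 python ring.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      size = comm.Get_size()

      # Pass a token around a ring of processes
      token = rank
      dest = (rank + 1) % size
      src = (rank - 1) % size
      received = comm.sendrecv(token, dest=dest, source=src)
      print(f"rank {rank} received token {received} from rank {src}")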

  4. A Cloud Theory-Based Trust Computing Model in Social Networks

    Directory of Open Access Journals (Sweden)

    Fengming Liu

    2016-12-01

    Full Text Available How to develop a trust management model and then efficiently control and manage nodes is an important issue in the scope of social network security. In this paper, a trust management model based on a cloud model is proposed. The cloud model uses a specific computation operator to achieve the transformation from qualitative concepts to quantitative computation. Additionally, this can also be used to effectively express the fuzziness and randomness of subjective trust, and the relationship between them. Node trust is divided into reputation trust and transaction trust, and evaluation methods are designed for each. Firstly, a two-dimensional trust cloud evaluation model is designed based on a node’s comprehensive and trading experience to determine the reputation trust. The expected value reflects the average trust status of nodes, while entropy and hyper-entropy are used to describe the uncertainty of trust. Secondly, the calculation methods for the proposed direct transaction trust and recommendation transaction trust comprehensively compute the transaction trust of each node. Then, choosing strategies were designed for nodes to trade based on the trust cloud. Finally, the results of a simulation experiment in P2P network file sharing on an experimental platform directly reflect the objectivity, accuracy and robustness of the proposed model, which can also effectively identify malicious or unreliable service nodes in the system and promote the service reliability of nodes with high credibility, thereby improving the stability of the whole network.
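
    One common form of the backward cloud generator, which estimates the digital characteristics (Ex, En, He) of a one-dimensional normal cloud from observed trust samples, sketched in Python; the paper's two-dimensional trust cloud builds on these one-dimensional quantities.

      import numpy as np

      def backward_cloud(samples):
          # Ex -- expected value (average trust level)
          # En -- entropy (fuzziness of the trust concept)
          # He -- hyper-entropy (uncertainty of the entropy itself)
          x = np.asarray(samples, dtype=float)
          ex = x.mean()
          en = np.sqrt(np.pi / 2.0) * np.mean(np.abs(x - ex))
          he = np.sqrt(max(x.var(ddof=1) - en ** 2, 0.0))
          return ex, en, he

      # e.g. a node's past transaction ratings on [0, 1]
      print(backward_cloud([0.9, 0.8, 0.85, 0.95, 0.7, 0.88]))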

  5. The Poor Man's Guide to Computer Networks and their Applications

    DEFF Research Database (Denmark)

    Sharp, Robin

    2003-01-01

    These notes for DTU course 02220, Concurrent Programming, give an introduction to computer networks, with focus on the modern Internet. Basic Internet protocols such as IP, TCP and UDP are presented, and two Internet application protocols, SMTP and HTTP, are described in some detail. Techniques...

  6. Blockchain-Empowered Fair Computational Resource Sharing System in the D2D Network

    Directory of Open Access Journals (Sweden)

    Zhen Hong

    2017-11-01

    Full Text Available Device-to-device (D2D) communication is becoming an increasingly important technology in future networks with the climbing demand for local services. For instance, resource sharing in the D2D network features ubiquitous availability, flexibility, low latency and low cost. However, these features also bring along challenges when building a satisfactory resource sharing system in the D2D network. Specifically, user mobility is one of the top concerns in designing a cooperative D2D computational resource sharing system, since mutual communication may not be stably available due to user mobility. A previous endeavour has demonstrated and proven how connectivity can be incorporated into cooperative task scheduling among users in the D2D network to effectively lower the average task execution time. There are doubts, however, about whether this type of task scheduling scheme, though effective, is fair to all users. In other words, it can be unfair for users who contribute many computational resources while receiving little when in need. In this paper, we propose a novel blockchain-based credit system that can be incorporated into the connectivity-aware task scheduling scheme to enforce fairness among users in the D2D network. Users’ computational task cooperation will be recorded on the public blockchain ledger in the system as transactions, and each user’s credit balance can be easily accessed from the ledger. A supernode at the base station is responsible for scheduling cooperative computational tasks based on user mobility and user credit balance. We investigated the performance of the credit system, and simulation results showed that with a minor sacrifice of average task execution time, the level of fairness is greatly enhanced.
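
    A toy hash-chained ledger in Python illustrating how task-cooperation records could yield an auditable credit balance; the field names and the absence of signatures, validation, and distributed consensus are simplifications, not the paper's design.

      import hashlib, json, time

      def make_block(transactions, prev_hash):
          # Append-only ledger block holding task-cooperation records
          # (payer, payee, credit amount), hash-chained to its predecessor.
          block = {
              "time": time.time(),
              "transactions": transactions,
              "prev_hash": prev_hash,
          }
          block["hash"] = hashlib.sha256(
              json.dumps(block, sort_keys=True).encode()).hexdigest()
          return block

      def credit_balance(chain, user):
          # Derive a user's credit balance by replaying the ledger.
          balance = 0
          for block in chain:
              for tx in block["transactions"]:
                  if tx["to"] == user:
                      balance += tx["credits"]
                  if tx["from"] == user:
                      balance -= tx["credits"]
          return balance

      genesis = make_block([], prev_hash="0" * 64)
      b1 = make_block([{"from": "alice", "to": "bob", "credits": 5}],
                      genesis["hash"])
      print(credit_balance([genesis, b1], "bob"))   # -> 5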

  7. CRYSNET manual. Informal report. [Hardware and software of crystallographic computing network

    Energy Technology Data Exchange (ETDEWEB)

    None,

    1976-07-01

    This manual describes the hardware and software which together make up the crystallographic computing network (CRYSNET). The manual is intended as a users' guide and also provides general information for persons without any experience with the system. CRYSNET is a network of intelligent remote graphics terminals that are used to communicate with the CDC Cyber 70/76 computing system at the Brookhaven National Laboratory (BNL) Central Scientific Computing Facility. Terminals are in active use by four research groups in the field of crystallography. A protein data bank has been established at BNL to store in machine-readable form atomic coordinates and other crystallographic data for macromolecules. The bank currently includes data for more than 20 proteins. This structural information can be accessed at BNL directly by the CRYSNET graphics terminals. More than two years of experience has been accumulated with CRYSNET. During this period, it has been demonstrated that the terminals, which provide access to a large, fast third-generation computer, plus stand-alone interactive graphics capability, are useful for computations in crystallography, and in a variety of other applications as well. The terminal hardware, the actual operations of the terminals, and the operations of the BNL Central Facility are described in some detail, and documentation of the terminal and central-site software is given. (RWR)

  8. Complex network problems in physics, computer science and biology

    Science.gov (United States)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that there are some problems which are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe
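
    One concrete instance of the physics-to-computer-science mapping mentioned above: the ground state of an antiferromagnetic Ising model on a graph is exactly a maximum cut of that graph, since the energy E(s) equals the number of uncut edges minus the number of cut edges. A brute-force Python check on a tiny (assumed) example graph:

      import itertools

      edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
      n = 4

      def energy(s):
          # Antiferromagnetic Ising energy with s_i in {-1, +1}
          return sum(s[i] * s[j] for i, j in edges)

      def cut_size(s):
          return sum(1 for i, j in edges if s[i] != s[j])

      best = min(itertools.product([-1, 1], repeat=n), key=energy)
      print("ground state:", best,
            "energy:", energy(best), "cut:", cut_size(best))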

  9. A computational framework for the automated construction of glycosylation reaction networks.

    Science.gov (United States)

    Liu, Gang; Neelamegham, Sriram

    2014-01-01

    Glycosylation is among the most common and complex post-translational modifications identified to date. It proceeds through the catalytic action of multiple enzyme families that include the glycosyltransferases that add monosaccharides to growing glycans, and glycosidases which remove sugar residues to trim glycans. The expression level and specificity of these enzymes, in part, regulate the glycan distribution or glycome of specific cell/tissue systems. Currently, there is no systematic method to describe the enzymes and cellular reaction networks that catalyze glycosylation. To address this limitation, we present a streamlined machine-readable definition for the glycosylating enzymes and additional methodologies to construct and analyze glycosylation reaction networks. In this computational framework, the enzyme class is systematically designed to store detailed specificity data such as enzymatic functional group, linkage and substrate specificity. The new classes and their associated functions enable both single-reaction inference and automated full network reconstruction, when given a list of reactants and/or products along with the enzymes present in the system. In addition, graph theory is used to support functions that map the connectivity between two or more species in a network, and that generate subset models to identify rate-limiting steps regulating glycan biosynthesis. Finally, this framework allows the synthesis of biochemical reaction networks using mass spectrometry (MS) data. The features described above are illustrated using three case studies that examine: i) O-linked glycan biosynthesis during the construction of functional selectin-ligands; ii) automated N-linked glycosylation pathway construction; and iii) the handling and analysis of glycomics based MS data. Overall, the new computational framework enables automated glycosylation network model construction and analysis by integrating knowledge of glycan structure and enzyme biochemistry. All

  10. A computational framework for the automated construction of glycosylation reaction networks.

    Directory of Open Access Journals (Sweden)

    Gang Liu

    Full Text Available Glycosylation is among the most common and complex post-translational modifications identified to date. It proceeds through the catalytic action of multiple enzyme families that include the glycosyltransferases that add monosaccharides to growing glycans, and glycosidases which remove sugar residues to trim glycans. The expression level and specificity of these enzymes, in part, regulate the glycan distribution or glycome of specific cell/tissue systems. Currently, there is no systematic method to describe the enzymes and cellular reaction networks that catalyze glycosylation. To address this limitation, we present a streamlined machine-readable definition for the glycosylating enzymes and additional methodologies to construct and analyze glycosylation reaction networks. In this computational framework, the enzyme class is systematically designed to store detailed specificity data such as enzymatic functional group, linkage and substrate specificity. The new classes and their associated functions enable both single-reaction inference and automated full network reconstruction, when given a list of reactants and/or products along with the enzymes present in the system. In addition, graph theory is used to support functions that map the connectivity between two or more species in a network, and that generate subset models to identify rate-limiting steps regulating glycan biosynthesis. Finally, this framework allows the synthesis of biochemical reaction networks using mass spectrometry (MS) data. The features described above are illustrated using three case studies that examine: i) O-linked glycan biosynthesis during the construction of functional selectin-ligands; ii) automated N-linked glycosylation pathway construction; and iii) the handling and analysis of glycomics based MS data. Overall, the new computational framework enables automated glycosylation network model construction and analysis by integrating knowledge of glycan structure and enzyme
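
    A toy sketch of automated network reconstruction from enzyme rules, in Python with NetworkX. The enzyme table and the suffix-matching specificity below are invented placeholders, far simpler than the linkage- and branch-aware rules such a framework actually encodes.

      import networkx as nx

      # Toy enzyme rules: each enzyme recognizes a glycan's terminal residue
      # and appends one monosaccharide.
      ENZYMES = {
          "GalT": {"requires": "GlcNAc", "adds": "Gal"},
          "SiaT": {"requires": "Gal",    "adds": "NeuAc"},
          "FucT": {"requires": "GlcNAc", "adds": "Fuc"},
      }

      def extend(glycan):
          # Yield (enzyme, product) pairs for every rule whose substrate
          # requirement matches the glycan's terminal residue.
          for name, rule in ENZYMES.items():
              if glycan.split("-")[-1] == rule["requires"]:
                  yield name, glycan + "-" + rule["adds"]

      def build_network(seeds, max_steps=3):
          G = nx.DiGraph()
          frontier = list(seeds)
          for _ in range(max_steps):
              nxt = []
              for g in frontier:
                  for enzyme, product in extend(g):
                      if not G.has_edge(g, product):
                          G.add_edge(g, product, enzyme=enzyme)
                          nxt.append(product)
              frontier = nxt
          return G

      G = build_network(["GlcNAc"])
      print(sorted(G.edges(data="enzyme")))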

  11. StarTrax --- The Next Generation User Interface

    Science.gov (United States)

    Richmond, Alan; White, Nick

    StarTrax is a software package to be distributed to end users for installation on their local computing infrastructure. It will provide access to many services of the HEASARC, i.e. bulletins, catalogs, proposal and analysis tools, initially for the ROSAT MIPS (Mission Information and Planning System), later for the Next Generation Browse. A user activating the GUI will reach all HEASARC capabilities through a uniform view of the system, independent of the local computing environment and of the networking method of accessing StarTrax. Use it if you prefer the point-and-click metaphor of modern GUI technology to the classical command-line interfaces (CLI). Notable strengths include: ease of use; excellent portability; very robust server support; a feedback button on every dialog; a painstakingly crafted User Guide. It is designed to support a large number of input devices including terminals, workstations and personal computers. XVT's Portability Toolkit is used to build the GUI in C/C++ to run on OSF/Motif (UNIX or VMS), OPEN LOOK (UNIX), Macintosh, MS-Windows (DOS), or character systems.

  12. Computing spin networks

    International Nuclear Information System (INIS)

    Marzuoli, Annalisa; Rasetti, Mario

    2005-01-01

    We expand a set of notions recently introduced providing the general setting for a universal representation of the quantum structure on which quantum information stands. The dynamical evolution process associated with generic quantum information manipulation is based on the (re)coupling theory of SU(2) angular momenta. Such a scheme automatically incorporates all the essential features that make quantum information encoding much more efficient than classical: it is fully discrete; it deals with inherently entangled states, naturally endowed with a tensor product structure; it allows for generic encoding patterns. The model proposed can be thought of as the non-Boolean generalization of the quantum circuit model, with unitary gates expressed in terms of 3nj coefficients connecting inequivalent binary coupling schemes of n + 1 angular momentum variables, as well as Wigner rotations in the eigenspace of the total angular momentum. A crucial role is played by elementary j-gates (6j symbols) which satisfy algebraic identities that make the structure of the model similar to 'state sum models' employed in discretizing topological quantum field theories and quantum gravity. The spin network simulator can thus be viewed also as a Combinatorial QFT model for computation. The semiclassical limit (large j) is discussed
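
    The elementary j-gates are 6j symbols, which can be evaluated exactly for small angular momenta, e.g. with SymPy in Python (the particular symbol below is just an example recoupling coefficient for four spin-1/2 systems):

      from sympy import Rational
      from sympy.physics.wigner import wigner_6j

      # The 6j symbol {1/2 1/2 1; 1/2 1/2 1}, evaluated exactly
      print(wigner_6j(Rational(1, 2), Rational(1, 2), 1,
                      Rational(1, 2), Rational(1, 2), 1))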

  13. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems

    Science.gov (United States)

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.

    2016-01-01

    In modern networked control applications, confidentiality and integrity are important features to address in order to prevent against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication Code (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
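
    A user-space sketch in Python of the kind of HMAC cost being measured; the key, frame size, and iteration count are placeholders, and the paper's actual evaluation is a kernel-level Linux implementation.

      import hashlib, hmac, time

      KEY = b"shared-network-key"          # placeholder key
      MSG = b"\x01" * 64                   # a 64-byte TT frame payload

      def tag(msg):
          # HMAC-SHA256 authentication tag for one frame; truncating the tag
          # to fit a TT frame's byte budget is a separate design choice.
          return hmac.new(KEY, msg, hashlib.sha256).digest()

      # Rough computational-overhead measurement (wall-clock time per tag)
      N = 100_000
      t0 = time.perf_counter()
      for _ in range(N):
          tag(MSG)
      dt = time.perf_counter() - t0
      print(f"{dt / N * 1e6:.2f} us per HMAC over {len(MSG)}-byte frames")

      # Verification must use a constant-time comparison:
      assert hmac.compare_digest(tag(MSG), tag(MSG))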

  14. A Robust Optimization Based Energy-Aware Virtual Network Function Placement Proposal for Small Cell 5G Networks with Mobile Edge Computing Capabilities

    OpenAIRE

    Blanco, Bego; Taboada, Ianire; Fajardo, Jose Oscar; Liberal, Fidel

    2017-01-01

    In the context of cloud-enabled 5G radio access networks with network function virtualization capabilities, we focus on the virtual network function placement problem for a multitenant cluster of small cells that provide mobile edge computing services. Under an emerging distributed network architecture and hardware infrastructure, we employ cloud-enabled small cells that integrate microservers for virtualization execution, equipped with additional hardware appliances. We develop an energy-awa...

  15. An exact method for computing the frustration index in signed networks using binary programming

    OpenAIRE

    Aref, Samin; Mason, Andrew J.; Wilson, Mark C.

    2016-01-01

    Computing the frustration index of a signed graph is a key step toward solving problems in many fields including social networks, physics, material science, and biology. The frustration index determines the distance of a network from a state of total structural balance. Although the definition of the frustration index goes back to 1960, its exact algorithmic computation, which is closely related to classic NP-hard graph problems, has only become a focus in recent years. We develop three new b...
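
    For tiny signed graphs the frustration index can be computed by brute force over bipartitions, which makes the definition concrete; the binary-programming models referred to above are what make exact computation practical at scale. A Python sketch:

      from itertools import product

      def frustration_index(nodes, signed_edges):
          # Exact frustration index by exhaustive search over bipartitions.
          # An edge (u, v, sign) is frustrated when a positive edge crosses
          # the partition or a negative edge does not.
          best = len(signed_edges)
          for bits in product([0, 1], repeat=len(nodes)):
              side = dict(zip(nodes, bits))
              frustrated = sum(
                  1 for u, v, sign in signed_edges
                  if (sign > 0) == (side[u] != side[v])
              )
              best = min(best, frustrated)
          return best

      # A triangle with one negative edge is the smallest unbalanced graph
      edges = [("a", "b", +1), ("b", "c", +1), ("a", "c", -1)]
      print(frustration_index("abc", edges))   # -> 1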

  16. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  17. POSTER: Privacy-Preserving Profile Similarity Computation in Online Social Networks

    NARCIS (Netherlands)

    Jeckmans, Arjan; Tang, Qiang; Hartel, Pieter H.

    2011-01-01

    Currently, none of the existing online social networks (OSNs) enables its users to make new friends without revealing their private information. This leaves the users in a vulnerable position when searching for new friends. We propose a solution which enables a user to compute her profile similarity

  18. Low-cost autonomous perceptron neural network inspired by quantum computation

    Science.gov (United States)

    Zidan, Mohammed; Abdel-Aty, Abdel-Haleem; El-Sadek, Alaa; Zanaty, E. A.; Abdel-Aty, Mahmoud

    2017-11-01

    Achieving low-cost learning with reliable accuracy is one of the important goals in building intelligent machines that save time and energy and perform learning over machines with limited computational resources. In this paper, we propose an efficient algorithm for a perceptron neural network, inspired by quantum computing and composed of a single neuron, that classifies linearly separable applications after a single training iteration, O(1). The algorithm is applied to a real-world data set, and the results outperform the other state-of-the-art algorithms.

  19. Using the world-wide computer network, Internet, in chemical sciences

    International Nuclear Information System (INIS)

    Edvardsen, Oe.

    1995-01-01

    Modern computer and information technology has opened up many possibilities for communicating various types of information efficiently throughout the world. A non-technical introduction to some of the available resources on the computer network, Internet, is given in this paper. Hints on where to start exploring the Internet and how to obtain information are provided. Methods of communicating between scientists, how to access archives, and modern multi-media information systems are described. Several examples of services available to chemists are shown. (au) (26 refs.)

  20. Efficient quantum computation in a network with probabilistic gates and logical encoding

    DEFF Research Database (Denmark)

    Borregaard, J.; Sørensen, A. S.; Cirac, J. I.

    2017-01-01

    An approach to efficient quantum computation with probabilistic gates is proposed and analyzed in both a local and nonlocal setting. It combines heralded gates previously studied for atom or atomlike qubits with logical encoding from linear optical quantum computation in order to perform high-fidelity quantum gates across a quantum network. The error-detecting properties of the heralded operations ensure high fidelity while the encoding makes it possible to correct for failed attempts such that deterministic and high-quality gates can be achieved. Importantly, this is robust to photon loss, which is typically the main obstacle to photonic-based quantum information processing. Overall this approach opens a path toward quantum networks with atomic nodes and photonic links.

  1. Artificial intelligence. Application of the Statistical Neural Networks computer program in nuclear medicine

    International Nuclear Information System (INIS)

    Stefaniak, B.; Cholewinski, W.; Tarkowska, A.

    2005-01-01

    Artificial Neural Networks (ANN) may be a tool alternative and complementary to typical statistical analysis. However, in spite of the many ready-to-use computer implementations of various ANN algorithms, artificial intelligence is still relatively rarely applied to data processing. In this paper, practical aspects of the scientific application of ANN in medicine using the Statistical Neural Networks computer program are presented. Several steps of data analysis with the above ANN software package are discussed briefly, from material selection and its division into groups to the types of results obtained. The typical problems connected with assessing scintigrams by ANN are also described. (author)

  2. Analysis of Various Computer System Monitoring and LCD Projector through the Network TCP/IP

    Directory of Open Access Journals (Sweden)

    Santoso Budijono

    2015-09-01

    Many electronic devices now include a network connection facility, and today's projectors offer network features to improve customer satisfaction in everyday use. By using devices that can be controlled remotely, the availability and reliability of the presentation system (computer and projector) can be maintained so that it is always ready for presentation. Nevertheless, some projectors still have no network facilities, requiring additional equipment at considerable expense. Moreover, controlling equipment in large quantities raises problems of timing and of the number of technicians needed to perform the controls. This study began with a literature review, searching for projectors with LAN connectivity and control software, and for computer control software that is easy to use and affordable. The result of this research is a system containing recommendations for the procurement of computer hardware, projector hardware, and software, each of which can be controlled centrally from a distance.
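
    The article does not include its implementation, so purely as a sketch of centralized device monitoring over TCP/IP, the Python fragment below polls a hypothetical inventory by attempting a TCP connection to each device; the names, addresses, and ports are placeholders, not taken from the study.

      import socket

      # Hypothetical device inventory; names, addresses, and ports are
      # placeholders. 4352 is the port commonly used by the PJLink
      # projector-control protocol.
      DEVICES = {
          "room-101-pc": ("192.168.1.10", 3389),
          "room-101-projector": ("192.168.1.11", 4352),
      }

      def is_reachable(host, port, timeout=2.0):
          # True when a TCP connection to (host, port) succeeds in time.
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      for name, (host, port) in DEVICES.items():
          state = "up" if is_reachable(host, port) else "DOWN"
          print(f"{name:20s} {host}:{port}  {state}")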

  3. Biological neural networks as model systems for designing future parallel processing computers

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    One of the more interesting debates of the present day centers on whether human intelligence can be simulated by computer. The author works under the premise that neurons individually are not smart at all. Rather, they are physical units which are impinged upon continuously by other matter that influences the direction of voltage shifts across the units' membranes. It is only through the action of a great many neurons, billions in the case of the human nervous system, that intelligent behavior emerges. What is required to understand even the simplest neural system is painstaking analysis, bit by bit, of the architecture and the physiological functioning of its various parts. The biological neural networks studied, the vestibular utricular and saccular maculas of the inner ear, are among the simplest of the mammalian neural networks to understand and model. While there is still a long way to go to understand even this simplest of neural networks in sufficient detail for extrapolation to computers and robots, a start was made. Moreover, the insights obtained and the technologies developed help advance the understanding of the more complex neural networks that underlie human intelligence.

  4. A study on optimal task decomposition of networked parallel computing using PVM

    International Nuclear Information System (INIS)

    Seong, Kwan Jae; Kim, Han Gyoo

    1998-01-01

    A numerical study is performed to investigate the effect of task decomposition on networked parallel processes using Parallel Virtual Machine (PVM). In our study, a PVM program distributed over a network of workstations is used to solve a finite difference version of a one-dimensional heat equation, for which the natural choice of PVM programming structure is the master-slave paradigm, with the aim of finding an optimal configuration that yields the least computing time including communication overhead among machines. Given a set of PVM tasks comprising one master and five slave programs, it is found that there exists a pseudo-optimal number of machines, which does not necessarily coincide with the number of tasks, that yields the best performance when the network is under light usage. Increasing the number of machines beyond this optimum does not improve computing performance, since the increase in communication overhead among the excess machines offsets the decrease in CPU time obtained by distributing the PVM tasks among them. However, when the network traffic is heavy, the results exhibit a more random character that is explained by the random nature of the data transfer time.
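
    PVM itself is rarely deployed today, so as a loose analogue of the paper's master-slave decomposition (not the authors' code), the Python sketch below solves the same kind of one-dimensional heat equation with one master process farming grid chunks out to a pool of worker processes, making the per-step communication visible in the structure.

      from multiprocessing import Pool
      import numpy as np

      ALPHA, DX, DT = 0.01, 0.1, 0.1   # diffusivity, grid spacing, time step
      R = ALPHA * DT / DX**2           # explicit scheme is stable for R <= 0.5

      def step_chunk(args):
          # Slave task: advance one chunk of the rod a single time step,
          # given one ghost point on each side of the chunk.
          left, chunk, right = args
          padded = np.concatenate(([left], chunk, [right]))
          return chunk + R * (padded[2:] - 2 * padded[1:-1] + padded[:-2])

      def master(u, n_slaves=5, n_steps=100):
          # Master: split the interior grid, farm chunks out each step,
          # gather the results; boundary values stay fixed at zero.
          with Pool(n_slaves) as pool:
              for _ in range(n_steps):
                  chunks = np.array_split(u[1:-1], n_slaves)
                  tasks, lo = [], 1
                  for c in chunks:
                      tasks.append((u[lo - 1], c, u[lo + len(c)]))
                      lo += len(c)
                  u[1:-1] = np.concatenate(pool.map(step_chunk, tasks))
          return u

      if __name__ == "__main__":
          u = np.zeros(101)
          u[50] = 100.0                # initial heat spike in the middle
          print(master(u).max())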

  5. Computer Network Availability at Sandia National Laboratories, Albuquerque NM: Measurement and Perception; TOPICAL

    International Nuclear Information System (INIS)

    NELSON, SPENCER D.; TOLENDINO, LAWRENCE F.

    1999-01-01

    The desire to provide a measure of computer network availability at Sandia National Laboratories has existed for a long time. Several attempts were made to build this measure by accurately recording network failures, identifying the type of network element involved, the root cause of the problem, and the time to repair the fault. Recognizing the limitations of the available methods, it became obvious that another approach to determining network availability had to be defined. The chosen concept involved the periodic sampling of network services and applications from various network locations. A measure of "network" availability was then calculated based on the ratio of polling successes to failures. The effort required to gather the information and produce a useful metric is not prohibitive, and the information gained has confirmed long-held impressions regarding network performance with real data.
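
    The report's tooling is not given; the minimal Python sketch below, over a made-up poll log, shows the availability computation it describes: the ratio of successful polls to total polls per sampled service.

      from collections import defaultdict

      # Made-up poll log: (service, succeeded) pairs gathered by periodic
      # sampling from several network locations.
      poll_log = [
          ("dns", True), ("dns", True), ("dns", False),
          ("mail", True), ("mail", True),
          ("web-proxy", True), ("web-proxy", False), ("web-proxy", True),
      ]

      counts = defaultdict(lambda: [0, 0])   # service -> [successes, attempts]
      for service, ok in poll_log:
          counts[service][1] += 1
          counts[service][0] += ok

      for service, (succ, total) in sorted(counts.items()):
          print(f"{service:10s} availability = {succ / total:6.1%} ({succ}/{total})")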

  6. Identifying failure in a tree network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
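
    The patent abstract does not specify how the current test value is computed, so the Python sketch below is only a rough rendering of the described loop, with an assumed scoring formula and synthetic performance numbers standing in for real I/O-node and compute-node measurements.

      import random

      def measure_perf(node):
          # Stand-in for a real bandwidth or latency measurement of one node.
          return random.uniform(0.5, 1.0)

      def find_suspects(io_node, compute_nodes, nominal_io_perf=1.0,
                        tree_threshold=0.6, sample_size=4):
          # Following the abstract: test subsets of compute nodes until a
          # set's test value is NOT below the tree performance threshold,
          # then return that set as potential problem nodes for individual
          # testing. The scoring formula is an assumption, not the patent's.
          remaining = list(compute_nodes)
          random.shuffle(remaining)
          while remaining:
              test_set, remaining = remaining[:sample_size], remaining[sample_size:]
              io_perf = measure_perf(io_node)
              set_perf = sum(measure_perf(n) for n in test_set) / len(test_set)
              test_value = set_perf * io_perf / nominal_io_perf  # assumed
              if test_value >= tree_threshold:
                  return test_set
          return []

      print(find_suspects("io-0", [f"cn-{i}" for i in range(16)]))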

  7. Expansion of the TFTR neutral beam computer system

    International Nuclear Information System (INIS)

    McEnerney, J.; Chu, J.; Davis, S.; Fitzwater, J.; Fleming, G.; Funk, P.; Hirsch, J.; Lagin, L.; Locasak, V.; Randerson, L.; Schechtman, N.; Silber, K.; Skelly, G.; Stark, W.

    1992-01-01

    Previous TFTR Neutral Beam computing support was based primarily on an Encore Concept 32/8750 computer within the TFTR Central Instrumentation, Control and Data Acquisition System (CICADA). The resources of this machine were 90% utilized during a 2.5-minute duty cycle. Both interactive and automatic processes were supported, with interactive response suffering at lower priority. Further, there were additional computing requirements and no cost-effective path for expansion within the Encore framework. Two elements provided a solution to these problems: improved price/performance for computing and a high-speed bus link to the SELBUS. The purchase of a Sun SPARCstation and a VME/SELBUS bus link allowed offloading the automatic processing to the workstation. This paper describes the details of the system, including the performance of the bus link and Sun SPARCstation, raw data acquisition and data server functions, application software conversion issues, and experiences with the UNIX operating system in the mixed-platform environment.

  8. CUBESIM, Hypercube and Denelcor Hep Parallel Computer Simulation

    International Nuclear Information System (INIS)

    Dunigan, T.H.

    1988-01-01

    1 - Description of program or function: CUBESIM is a set of subroutine libraries and programs for the simulation of message-passing parallel computers and shared-memory parallel computers. Subroutines are supplied to simulate the Intel hypercube and the Denelcor HEP parallel computers. The system permits a user to develop and test parallel programs written in C or FORTRAN on a single processor. The user may alter such hypercube parameters as message startup times, packet size, and the computation-to-communication ratio. The simulation generates a trace file that can be used for debugging, performance analysis, or graphical display. 2 - Method of solution: The CUBESIM simulator is linked with the user's parallel application routines to run as a single UNIX process. The simulator library provides a small operating system to perform process and message management. 3 - Restrictions on the complexity of the problem: Up to 128 processors can be simulated with a virtual memory limit of 6 million bytes. Up to 1000 processes can be simulated

  9. Reconfigurable FPGA architecture for computer vision applications in Smart Camera Networks

    OpenAIRE

    Maggiani, Luca; Salvadori, Claudio; Petracca, Matteo; Pagano, Paolo; Saletti, Roberto

    2013-01-01

    Smart Camera Networks (SCNs) are an emerging research field representing the natural evolution of centralized computer vision applications towards fully distributed and pervasive systems. In such a scenario, one of the biggest efforts lies in the definition of a flexible and reconfigurable SCN node architecture able to remotely support updating of application parameters and changing of the running computer vision applications at run-time. In th...

  10. Development and application of computer network for working out of researches on high energy physics

    International Nuclear Information System (INIS)

    Boos, Eh.G.; Tashimov, M.A.

    2001-01-01

    The computer network of the Physical and Technological Institute of the Ministry of Science and Education of the Republic of Kazakhstan (FTI of MSE RK) joins a number of research institutions, leading universities, and other enterprises of Almaty city. At present more than 350 computers are connected to this network, and the satellite channel speed has been increased to 192 kbit/s. The university segments of the network have been separated into an individual domain. New software for the analysis and processing of experimental data has been implemented, and other measures have been carried out as well. However, the increasing volume of information exchange between nuclear physics centers demands further development of the network. To meet users' demands for information exchange in the coming years, the paper considers the following measures: (1) increasing the satellite channel speed to 1-2 Mbit/s by replacing the existing SDM-100 modem with a faster one; the Kedr-M station and the CISCO-2501 router now in use can support this speed; (2) converting the Institute's local area network to the new Fast Ethernet technology, raising the transmission speed to 100 Mbit/s while remaining fully compatible with the existing Ethernet; (3) installing a proxy server (firewall) to offload the satellite channel and to isolate the network segment used for Internet-based instruction without detriment to the educational process. In the framework of cooperation with the DESY German accelerator center, data on about two hundred thousand deep inelastic interactions of electrons with protons measured with the ZEUS detector have been obtained with the help of this network. Data on about 10,000 events simulated at the OPAL installation have been received as well. Besides, the computer network is used for operative information exchange and ...

  11. Global tree network for computing structures enabling global processing operations

    Science.gov (United States)

    Blumrich; Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in an asynchronous or synchronized manner, and is physically and logically partitionable.
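
    As an illustration of one of the listed global operations (not the patented hardware), the short Python sketch below performs an upstream reduction, combining values from the leaves toward the root of a small tree.

      def tree_reduce(tree, node, values, op=sum):
          # Combine each node's value with its children's results,
          # leaf-to-root, as in a collective reduction on a tree network.
          child_results = [tree_reduce(tree, c, values, op)
                           for c in tree.get(node, [])]
          return op([values[node]] + child_results)

      # A root with two interior nodes, each holding two leaves.
      tree = {"root": ["a", "b"], "a": ["a0", "a1"], "b": ["b0", "b1"]}
      values = {"root": 1, "a": 2, "b": 3, "a0": 4, "a1": 5, "b0": 6, "b1": 7}
      print(tree_reduce(tree, "root", values))   # 28: the global sum at the root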

  12. An efficient network for interconnecting remote monitoring instruments and computers

    International Nuclear Information System (INIS)

    Halbig, J.K.; Gainer, K.E.; Klosterbuer, S.F.

    1994-01-01

    Remote monitoring instrumentation must be connected with computers and other instruments. The cost and intrusiveness of installing cables in new and existing plants present problems for the facility and the International Atomic Energy Agency (IAEA). The authors have tested a network that could accomplish this interconnection using mass-produced commercial components developed for use in industrial applications. Unlike the hardware components of most networks, these components, manufactured and distributed in North America, Europe, and Asia, lend themselves to small and low-powered applications. The heart of the network is a chip with three microprocessors and proprietary network software contained in read-only memory. In addition to all non-user levels of protocol, the software also contains message-authentication capabilities. This chip can be interfaced to a variety of transmission media, for example, RS-485 lines, fiber optic cables, RF waves, and standard AC power lines. The use of power lines as the transmission medium in a facility could significantly reduce cabling costs.

  13. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    Science.gov (United States)

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  15. Universal quantum computation in a semiconductor quantum wire network

    International Nuclear Information System (INIS)

    Sau, Jay D.; Das Sarma, S.; Tewari, Sumanta

    2010-01-01

    Universal quantum computation (UQC) using Majorana fermions on a two-dimensional topological superconducting (TS) medium remains an outstanding open problem. This is because the quantum gate set that can be generated by braiding of the Majorana fermions does not include any two-qubit gate and also no single-qubit π/8 phase gate. In principle, it is possible to create these crucial extra gates using quantum interference of Majorana fermion currents. However, it is not clear if the motion of the various order parameter defects (vortices, domain walls, etc.), to which the Majorana fermions are bound in a TS medium, can be quantum coherent. We show that these obstacles can be overcome using a semiconductor quantum wire network in the vicinity of an s-wave superconductor, by constructing topologically protected two-qubit gates and any arbitrary single-qubit phase gate in a topologically unprotected manner, which can be error corrected using magic-state distillation. Thus our strategy, using a judicious combination of topologically protected and unprotected gate operations, realizes UQC on a quantum wire network with a remarkably high error threshold of 0.14, as compared to 10^-3 to 10^-4 in ordinary unprotected quantum computation.

  16. Rapid Sampling of Hydrogen Bond Networks for Computational Protein Design.

    Science.gov (United States)

    Maguire, Jack B; Boyken, Scott E; Baker, David; Kuhlman, Brian

    2018-05-08

    Hydrogen bond networks play a critical role in determining the stability and specificity of biomolecular complexes, and the ability to design such networks is important for engineering novel structures, interactions, and enzymes. One key feature of hydrogen bond networks that makes them difficult to rationally engineer is that they are highly cooperative and are not energetically favorable until the hydrogen bonding potential has been satisfied for all buried polar groups in the network. Existing computational methods for protein design are ill-equipped for creating these highly cooperative networks because they rely on energy functions and sampling strategies that are focused on pairwise interactions. To enable the design of complex hydrogen bond networks, we have developed a new sampling protocol in the molecular modeling program Rosetta that explicitly searches for sets of amino acid mutations that can form self-contained hydrogen bond networks. For a given set of designable residues, the protocol often identifies many alternative sets of mutations/networks, and we show that it can readily be applied to large sets of residues at protein-protein interfaces or in the interior of proteins. The protocol builds on a recently developed method in Rosetta for designing hydrogen bond networks that has been experimentally validated for small symmetric systems but was not extensible to many larger protein structures and complexes. The sampling protocol we describe here not only recapitulates previously validated designs with performance improvements but also yields viable hydrogen bond networks for cases where the previous method fails, such as the design of large, asymmetric interfaces relevant to engineering protein-based therapeutics.

  17. Analyzing 3D xylem networks in Vitis vinifera using High Resolution Computed Tomography (HRCT)

    Science.gov (United States)

    Recent developments in High Resolution Computed Tomography (HRCT) have made it possible to visualize three dimensional (3D) xylem networks without time consuming, labor intensive physical sectioning. Here we describe a new method to visualize complex vessel networks in plants and produce a quantitat...

  18. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    Science.gov (United States)

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies gradually mature, ubiquitous healthcare services can respond to accidents instantly and effectively, as well as provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework integrating cloud and wireless body sensor networks, applied mainly to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation for processing sensing data over the computing architecture, network conditions, and performance evaluation. Through this framework, the transmission and computing time for sensing data are reduced, enhancing overall performance of the fall-event detection and 3-D motion reconstruction services.

  19. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    Science.gov (United States)

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on generic source tasks to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  20. Two Quarantine Models on the Attack of Malicious Objects in Computer Network

    Directory of Open Access Journals (Sweden)

    Bimal Kumar Mishra

    2012-01-01

    SEIQR (Susceptible, Exposed, Infectious, Quarantined, and Recovered) models for the transmission of malicious objects with simple mass-action incidence and standard incidence rate in a computer network are formulated. Thresholds, equilibria, and their stability are discussed for both the simple mass-action incidence and the standard incidence rate. Global stability and asymptotic stability of the endemic equilibrium for simple mass-action incidence have been shown. With the help of the Poincaré-Bendixson property, asymptotic stability of the endemic equilibrium for the standard incidence rate has been shown. Numerical methods have been used to solve and simulate the system of differential equations. The effect of quarantine on recovered nodes is analyzed. We also analyze the behavior of the susceptible, exposed, infected, quarantined, and recovered nodes in the computer network.
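
    The record does not reproduce the paper's parameter values, so the Python sketch below integrates a generic SEIQR system with simple mass-action incidence using assumed rate constants, only to illustrate the compartment flow the abstract describes.

      from scipy.integrate import solve_ivp

      # Assumed rate constants, not taken from the paper: beta = contact
      # rate, sigma = exposed -> infectious, delta = infectious ->
      # quarantined, gamma = quarantined -> recovered.
      beta, sigma, delta, gamma = 0.5, 0.2, 0.3, 0.1

      def seiqr(t, y):
          # Compartment flow with simple mass-action incidence beta * S * I.
          S, E, I, Q, R = y
          new_infections = beta * S * I
          return [-new_infections,
                  new_infections - sigma * E,
                  sigma * E - delta * I,
                  delta * I - gamma * Q,
                  gamma * Q]

      y0 = [0.99, 0.0, 0.01, 0.0, 0.0]   # fractions of network nodes
      sol = solve_ivp(seiqr, (0, 200), y0)
      S, E, I, Q, R = sol.y[:, -1]
      print(f"final: S={S:.3f} E={E:.3f} I={I:.3f} Q={Q:.3f} R={R:.3f}")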