WorldWideScience

Sample records for acquisition computer network

  1. Automatic data-acquisition and communications computer network for fusion experiments

    International Nuclear Information System (INIS)

    Kemper, C.O.

    1981-01-01

    A network of more than twenty computers serves the data acquisition, archiving, and analysis requirements of the ISX, EBT, and beam-line test facilities at the Fusion Division of Oak Ridge National Laboratory. The network includes PDP-8, PDP-12, PDP-11, PDP-10, and Interdata 8-32 processors, and is unified by a variety of high-speed serial and parallel communications channels. While some processors are dedicated to experimental data acquisition, and others are dedicated to later analysis and theoretical work, many processors perform a combination of acquisition, real-time analysis and display, and archiving and communications functions. A network software system has been developed which runs in each processor and automatically transports data files from point of acquisition to point or points of analysis, display, and storage, providing conversion and formatting functions as required.

  2. A local computer network for the experimental data acquisition at BESSY

    International Nuclear Information System (INIS)

    Buchholz, W.

    1984-01-01

    For the users of the Berlin dedicated electron storage ring for synchrotron radiation (BESSY) a local computer network has been installed. The system is designed primarily for data acquisition and offers the users a generous hardware provision combined with maximum software flexibility

  3. A Lossless Network for Data Acquisition

    CERN Document Server

    AUTHOR|(SzGeCERN)698154; The ATLAS collaboration; Lehmann Miotto, Giovanna

    2016-01-01

    The planned upgrades of the experiments at the Large Hadron Collider at CERN will require higher bandwidth networks for their data acquisition systems. The network congestion problem arising from the bursty many-to-one communication pattern, typical for these systems, will become more demanding. It is questionable whether commodity TCP/IP and Ethernet technologies in their current form will still be able to effectively adapt to the bursty traffic without losing packets due to the scarcity of buffers in the networking hardware. We continue our study of the idea of lossless switching in software running on commercial-off-the-shelf servers for data acquisition systems, using the ATLAS experiment as a case study. The flexibility of design in software, the performance of modern computer platforms, and buffering capabilities constrained solely by the amount of DRAM memory are a strong basis for building a network dedicated to data acquisition with commodity hardware, which can provide reliable transport in congested co...

  4. Data acquisition and control network

    International Nuclear Information System (INIS)

    Hajjar, Victor.

    1983-02-01

    We have participated in the construction of the CELLO detector on the PETRA e+e- collider in Hamburg in order to test some of the current high energy physics theories. Some 60,000 channels collecting the detector information are connected to the main computer through the CAMAC acquisition system and specialized ROMULUS subsystems. Each of these subsystems is monitored by its dedicated microprocessor using a CAMAC dataway spy module. All these microprocessors are connected to the main computer through a ''star''-type network. Data are read out by the main computer (PDP11-45) and concentrated in a circular buffer. They are then filtered and transferred to a PDP11-55, also in the network, for storage.
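
    The circular buffer mentioned above is a standard fixed-capacity ring that overwrites its oldest entries once full. A minimal Python sketch of that idea (an illustration only, not the original PDP-11 implementation):

```python
class RingBuffer:
    """Fixed-capacity circular buffer: oldest events are overwritten when full."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0      # next write position
        self.count = 0     # number of valid entries

    def push(self, event):
        self.buf[self.head] = event
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def drain(self):
        """Return events oldest-first and empty the buffer."""
        start = (self.head - self.count) % self.capacity
        events = [self.buf[(start + i) % self.capacity] for i in range(self.count)]
        self.count = 0
        return events

rb = RingBuffer(4)
for e in range(6):          # six events into a four-slot buffer
    rb.push(e)
print(rb.drain())           # [2, 3, 4, 5]: the oldest two events were overwritten
```

    Overwriting the oldest entries keeps acquisition running at full rate even when the downstream filter briefly falls behind, at the cost of losing the stalest events.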

  5. Distributed multiprotocol acquisition network for environmental data

    Science.gov (United States)

    Barone, Fabrizio; De Rosa, Rosario; Milano, Leopoldo; Qipiani, Ketevan

    2003-03-01

    The acquisition and storage of large amounts of data coming from distributed environmental sensors of different kinds can be handled with a network linking the acquisition subsystems, but problems can arise if those subsystems are not homogeneous. In this case the network should be as flexible as possible to ensure modularity and connectivity. In this work we describe the development and testing of a network-based acquisition system. The network uses, where possible, commercial products based on different standards, in order to increase the availability of its components as well as its modularity. In addition, it is completely independent of proprietary hardware and software products. In particular, we tested an acquisition network based on multiple transmission protocols, such as wireless and cabled RS232 and Fast Ethernet, which includes the acquisition, archiving, and data analysis systems. Each acquisition subsystem can get time from satellite using GPS, and is able to monitor seismic activity, temperature, pressure, humidity, and electromagnetic data. The sampling frequency and the dynamics of the acquired data can be matched to the characteristics of each probe. All the acquisition stations can use different platforms as well as different operating systems. Tests have been performed to evaluate the capability for long acquisition periods and the fault tolerance of the whole system.

  6. A Double Dwell High Sensitivity GPS Acquisition Scheme Using Binarized Convolution Neural Network

    Directory of Open Access Journals (Sweden)

    Zhen Wang

    2018-05-01

    Conventional GPS acquisition methods, such as Max selection and threshold crossing (MAX/TC), estimate GPS code/Doppler by its correlation peak. Different from MAX/TC, a multi-layer binarized convolution neural network (BCNN) is proposed to recognize the GPS acquisition correlation envelope in this article. The proposed method is a double-dwell acquisition in which a short integration is adopted in the first dwell and a long integration is applied in the second one. To reduce the search space for parameters, BCNN detects the possible envelope which contains the auto-correlation peak in the first dwell, compressing the initial search space to 1/1023. Although there is a long integration in the second dwell, the acquisition computation overhead is still low due to the compressed search space. Overall, the total computation overhead of the proposed method is only 1/5 that of conventional ones. Experiments show that the proposed double dwell/correlation envelope identification (DD/CEI) neural network achieves a 2 dB improvement when compared with MAX/TC under the same specification.
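
    The conventional MAX/TC baseline described above can be sketched in a few lines: correlate the received signal against every circular shift of the local code, take the maximum, and compare it to a threshold. The 31-chip random code and the threshold below are illustrative assumptions, not the article's actual parameters (a real GPS C/A code has 1023 chips):

```python
import random

def circular_corr(code, rx):
    """Correlation of rx against all circular shifts of the local code."""
    n = len(code)
    return [sum(code[(i + tau) % n] * rx[i] for i in range(n)) for tau in range(n)]

def max_tc_acquire(code, rx, threshold):
    """MAX/TC: declare acquisition at the peak shift if it crosses the threshold."""
    corr = circular_corr(code, rx)
    peak = max(corr)
    tau = corr.index(peak)
    return (tau, peak) if peak >= threshold else None

random.seed(1)
code = [random.choice([-1, 1]) for _ in range(31)]   # toy 31-chip PRN-like code
true_delay = 7
rx = code[true_delay:] + code[:true_delay]           # received = code delayed by 7 chips
result = max_tc_acquire(code, rx, threshold=20)
print(result)  # (7, 31): the peak sits at the true code phase (noise-free case)
```

    The search over every shift is what the BCNN's envelope detection avoids: by narrowing the candidate region in the first dwell, only a small fraction of shifts needs the expensive long integration.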

  7. Acquisition management of the Global Transportation Network

    Science.gov (United States)

    2001-08-02

    This report discusses the acquisition management of the Global Transportation Network by the U.S. Transportation Command. This report is one in a series of audit reports addressing DoD acquisition management of information technology systems. The Glo...

  8. Acquisition Information Management system telecommunication site survey results

    Energy Technology Data Exchange (ETDEWEB)

    Hake, K.A. [Oak Ridge National Lab., TN (United States); Key, B.G. [COR, Inc., Oak Ridge, TN (United States)

    1993-09-01

    The Army acquisition community currently uses a dedicated, point-to-point secure computer network for the Army Material Plan Modernization (AMPMOD). It must transition to the DoD-supplied Defense Secure Network 1 (DSNET1). This is one of the first networks of this size to begin the transition. The type and amount of computing resources available at individual sites may or may not meet the new network requirements. This task surveys the existing telecommunications resources available in the Army acquisition community. It documents existing communication equipment, computer hardware, and associated software, and recommends appropriate changes.

  9. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    Science.gov (United States)

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  10. Efficient network monitoring for large data acquisition systems

    International Nuclear Information System (INIS)

    Savu, D.O.; Martin, B.; Al-Shabibi, A.; Sjoen, R.; Batraneanu, S.M.; Stancu, S.N.

    2012-01-01

    Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high-speed real-time data acquisition (DAQ) system. DAQ networks are particular computer networks where experts have to pay attention to both individual subsections and system-wide traffic flows while monitoring the network. The ATLAS network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts and totaling 8500 high-speed links. The use of heterogeneous tools for monitoring various infrastructure parameters, in order to assure optimal DAQ system performance, proved to be a tedious and time-consuming task for experts. To alleviate this problem we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider that is used. Our system uses custom-developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools, such as NAGIOS, information services, or custom databases. A number of techniques (e.g. normalization, aggregation, and data caching) were used in order to improve the user interface response time. The end result is a unified monitoring interface, for fast and uniform access to system statistics, which significantly reduced the time spent by experts on ad-hoc and post-mortem analysis. (authors)
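
    The normalization, aggregation, and caching techniques mentioned above can be illustrated with a small TTL cache wrapped around an aggregated link statistic. The class and names below are illustrative assumptions, not the API of the ATLAS monitoring tool:

```python
import time

class CachedAggregator:
    """Cache an aggregated statistic so repeated UI queries skip recomputation."""
    def __init__(self, fetch, ttl=5.0):
        self.fetch = fetch      # callable returning (used, capacity) per link
        self.ttl = ttl          # seconds before the cached value goes stale
        self._value = None
        self._stamp = -float("inf")

    def utilization(self):
        now = time.monotonic()
        if now - self._stamp > self.ttl:
            raw = self.fetch()                       # e.g. bytes/s per link
            norm = [used / cap for used, cap in raw] # normalize to link capacity
            self._value = sum(norm) / len(norm)      # aggregate: mean utilization
            self._stamp = now
        return self._value

calls = []
def fake_fetch():
    calls.append(1)
    return [(5e8, 1e9), (2.5e8, 1e9)]               # two links at 50% and 25%

agg = CachedAggregator(fake_fetch, ttl=60)
print(agg.utilization())   # 0.375, computed from the data source
print(agg.utilization())   # 0.375 again, served from the cache
print(len(calls))          # 1: the backend was queried only once
```

    Serving repeated queries from the cache is what keeps the interface responsive when many users inspect the same system-wide statistic.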

  11. Brookhaven Reactor Experiment Control Facility, a distributed function computer network

    International Nuclear Information System (INIS)

    Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

    1975-11-01

    A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and has been set into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented

  12. New KENS data acquisition system

    International Nuclear Information System (INIS)

    Arai, M.; Furusaka, M.; Satoh, S.

    1989-01-01

    In this report, the authors discuss a data acquisition system, KENSnet, which has been newly introduced at the KENS facility. The criteria for the data acquisition system were about 1 MIPS of CPU speed and 150 Mbytes of storage capacity per computer per spectrometer. VAX computers were chosen, with their proprietary operating system, VMS. The VAX computers are connected by a DECnet network mediated by Ethernet. Front-end computers, Apple Macintosh Plus and Macintosh II, were chosen for their user-friendly manipulation and intelligence. New CAMAC-based data acquisition electronics were developed. The data acquisition control program (ICP) and the general data analysis program (Genie) were both developed at ISIS and have been installed. 2 refs., 3 figs., 1 tab

  13. Ultra-low power high precision magnetotelluric receiver array based customized computer and wireless sensor network

    Science.gov (United States)

    Chen, R.; Xi, X.; Zhao, X.; He, L.; Yao, H.; Shen, R.

    2016-12-01

    Dense 3D magnetotelluric (MT) data acquisition has the benefit of suppressing the static shift and topography effects, and can achieve high-precision, high-resolution inversion of underground structure. This method may play an important role in mineral exploration, geothermal resources exploration, and hydrocarbon exploration. It is necessary to greatly reduce the power consumption of an MT signal receiver for large-scale 3D MT data acquisition, while using a sensor network to monitor the data quality of deployed MT receivers. We adopted a series of technologies to realize the above goal. First, we designed a low-power embedded computer which couples tightly with the other parts of the MT receiver and supports a wireless sensor network. The power consumption of our embedded computer is less than 1 watt. Then we designed a 4-channel data acquisition subsystem which supports 24-bit analog-to-digital conversion, GPS synchronization, and real-time digital signal processing. Furthermore, we developed the power supply and power management subsystem for the MT receiver. Finally, a series of software tools, supporting data acquisition, calibration, the wireless sensor network, and testing, was developed. The software, which runs on a personal computer, can monitor and control over 100 MT receivers in the field for data acquisition and quality control. The total power consumption of the receiver is about 2 watts in full operation. The standby power consumption is less than 0.1 watt. Our testing showed that the MT receiver can acquire good-quality data at ground with an electrical dipole length of 3 m. Over 100 MT receivers were made and used for large-scale geothermal exploration in China with great success.

  14. A Lossless Network for Data Acquisition

    CERN Document Server

    AUTHOR|(SzGeCERN)698154; The ATLAS collaboration; Lehmann Miotto, Giovanna

    2017-01-01

    The bursty many-to-one communication pattern, typical for data acquisition systems, is particularly demanding for commodity TCP/IP and Ethernet technologies. We expand the study of lossless switching in software running on commercial-off-the-shelf servers, using the ATLAS experiment as a case study. In this paper we extend the popular software switch, Open vSwitch, with a dedicated, throughput-oriented buffering mechanism for data acquisition. We compare the performance under heavy congestion on typical Ethernet switches to a commodity server acting as a switch. Our results indicate that software switches with large buffers perform significantly better. Next, we evaluate the scalability of the system when building a larger topology of interconnected software switches, exploiting the integration with software-defined networking technologies. We build an IP-only leaf-spine network consisting of eight software switches running on separate physical servers as a demonstrator.

  15. A Lossless Network for Data Acquisition

    Science.gov (United States)

    Jereczek, Grzegorz; Lehmann Miotto, Giovanna; Malone, David; Walukiewicz, Miroslaw

    2017-06-01

    The bursty many-to-one communication pattern, typical for data acquisition systems, is particularly demanding for commodity TCP/IP and Ethernet technologies. We expand the study of lossless switching in software running on commercial off-the-shelf servers, using the ATLAS experiment as a case study. In this paper, we extend the popular software switch, Open vSwitch, with a dedicated, throughput-oriented buffering mechanism for data acquisition. We compare the performance under heavy congestion on typical Ethernet switches to a commodity server acting as a switch. Our results indicate that software switches with large buffers perform significantly better. Next, we evaluate the scalability of the system when building a larger topology of interconnected software switches, exploiting the integration with software-defined networking technologies. We build an IP-only leaf-spine network consisting of eight software switches running on distinct physical servers as a demonstrator.

  16. SAR: A fast computer for Camac data acquisition

    International Nuclear Information System (INIS)

    Bricaud, B.; Faivre, J.C.; Pain, J.

    1979-01-01

    This paper describes a special data acquisition and processing facility developed for nuclear physics experiments at intermediate energy, installed at SATURNE (France) and at CERN (Geneva, Switzerland). Previously, we used a PDP 11/45 computer which was connected to the experiments through a Camac branch highway. In a typical experiment (340 words per event), the computer limited the data acquisition rate to 4 μs per 16-bit transfer and the on-line data reduction to only 20 events per second. The initial goal of this project was to improve both figures. Previously known acquisition processors were limited by the memory capacity these systems could support; most of the time the data reduction was done on the host minicomputer. Higher memory sizes can be designed with new fast RAM (Intel 2147), and the data processing can now take place on the front-end processor

  17. Software Switching for High Throughput Data Acquisition Networks

    CERN Document Server

    AUTHOR|(CDS)2089787; Lehmann Miotto, Giovanna

    The bursty many-to-one communication pattern, typical for data acquisition systems, is particularly demanding for commodity TCP/IP and Ethernet technologies. The problem arising from this pattern is widely known in the literature as incast and can be observed as TCP throughput collapse. It is a result of overloading the switch buffers when a specific node in a network requests data from multiple sources. This will become even more demanding for future upgrades of the experiments at the Large Hadron Collider at CERN. It is questionable whether commodity TCP/IP and Ethernet technologies in their current form will still be able to effectively adapt to bursty traffic without losing packets due to the scarcity of buffers in the networking hardware. This thesis provides an analysis of TCP/IP performance in data acquisition networks and presents a novel approach to incast congestion in these networks based on software-based packet forwarding. Our first contribution lies in confirming the strong analogies bet...
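
    A toy discrete-time model can illustrate why incast overwhelms shallow switch buffers while a DRAM-sized software buffer absorbs the whole burst. All parameters below are illustrative assumptions, not measurements from the thesis:

```python
def incast_drops(n_senders, burst_pkts, buffer_pkts, drain_per_tick, send_per_tick):
    """Count packets dropped at a single congested port during a synchronized burst."""
    remaining = [burst_pkts] * n_senders
    queue = 0
    dropped = 0
    while any(remaining) or queue:
        # all senders push simultaneously (the many-to-one pattern)
        for i in range(n_senders):
            tx = min(remaining[i], send_per_tick)
            remaining[i] -= tx
            space = buffer_pkts - queue
            accepted = min(tx, space)
            queue += accepted
            dropped += tx - accepted        # packets that found the buffer full
        queue = max(0, queue - drain_per_tick)  # the port drains at line rate
    return dropped

# 16 senders each burst 100 packets into a port that drains 10 packets per tick
shallow = incast_drops(16, 100, buffer_pkts=50, drain_per_tick=10, send_per_tick=1)
deep = incast_drops(16, 100, buffer_pkts=10**6, drain_per_tick=10, send_per_tick=1)
print(shallow, deep)   # the shallow buffer drops packets; the deep one drops none
```

    In TCP terms, every drop in the shallow case triggers a retransmission timeout on some flow, which is what manifests as throughput collapse; the large software buffer trades latency for zero loss.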

  18. Introduction to computer networking

    CERN Document Server

    Robertazzi, Thomas G

    2017-01-01

    This book gives a broad look at both fundamental networking technology and new areas that support and use it. It is a concise introduction to the most prominent, recent technological topics in computer networking. Topics include network technology such as wired and wireless networks, enabling technologies such as data centers, software defined networking, cloud and grid computing, and applications such as networks on chips, space networking and network security. The accessible writing style and non-mathematical treatment make this a useful book for the student, network and communications engineer, computer scientist and IT professional. • Features a concise, accessible treatment of computer networking, focusing on new technological topics; • Provides a non-mathematical introduction to networks in their most common forms today; • Includes new developments in switching, optical networks, WiFi, Bluetooth, LTE, 5G, and quantum cryptography.

  19. Management and organisational barriers in the acquisition of computer usage skills by mature age workers.

    Science.gov (United States)

    Keogh, Mark

    2009-09-01

    To investigate workplace cultures in the acquisition of computer usage skills by mature age workers. Data were gathered through focus groups conducted at job network centres in the Greater Brisbane metropolitan region. Participants were a mixture of workers and job-seekers. The results suggest that mature age workers can be exposed to inappropriate computer training practices and age-insensitive attitudes towards those with low base computer skills. There is a need for managers to be observant of ageist attitudes in the workplace and to develop age-sensitive strategies to help mature age workers learn computer usage skills. Mature age workers also need to develop skills in ways which are practical and meaningful to their work.

  20. Optical fiber network of the data acquisition sub system of SIIP Integral Information System of Process, Unit 2

    International Nuclear Information System (INIS)

    Moreno R, J.; Ramirez C, M.J.; Pina O, I.; Cortazar F, S.; Villavicencio R, A.

    1995-01-01

    In this article, a description is given of the optical-fiber communication network that links the data acquisition equipment with the computers of the Laguna Verde Nuclear Power Plant SIIP. The equipment and accessories that make up the network are also described, along with the plant requirements that led to the selection of optical fiber as the interconnection medium. SIIP is a computerized, centralized, and integrated system that performs information functions by acquiring signals and carrying out the computational processing required for the continuous evaluation of the nuclear power plant under normal and emergency conditions. It is purely a monitoring system, with no action on the generation process; that is to say, it only acquires, processes, and stores information and assists the personnel in the operational analysis of the nuclear plant. SIIP is a joint project of three participating institutions: the Federal Electricity Commission, the Electrical Research Institute, and General Electric. (Author)

  1. A data acquisition system based on a personal computer

    International Nuclear Information System (INIS)

    Omata, K.; Fujita, Y.; Yoshikawa, N.; Sekiguchi, M.; Shida, Y.

    1991-07-01

    A versatile and flexible data acquisition system, KODAQ (Kakuken Online Data AcQuisition system), has been developed. The system runs with CAMAC on one of the most popular Japanese personal computers, the PC9801 (NEC), similar to the IBM PC/AT. The system is designed to make it easy to set up a data acquisition system for various kinds of nuclear-physics experiments. (author)

  2. Accelerator optimization using a network control and acquisition system

    International Nuclear Information System (INIS)

    Geddes, Cameron G.R.; Catravas, P.E.; Faure, Jerome; Toth, Csaba; Tilborg, J. van; Leemans, Wim P.

    2002-01-01

    Accelerator optimization requires detailed study of many parameters, indicating the need for remote control and automated data acquisition systems. A control and data acquisition system based on a network of commodity PCs and applications with standards based inter-application communication is being built for the l'OASIS accelerator facility. This system allows synchronous acquisition of data at high (> 1 Hz) rates and remote control of the accelerator at low cost, allowing detailed study of the acceleration process

  3. Southern California Seismic Network: New Design and Implementation of Redundant and Reliable Real-time Data Acquisition Systems

    Science.gov (United States)

    Saleh, T.; Rico, H.; Solanki, K.; Hauksson, E.; Friberg, P.

    2005-12-01

    The Southern California Seismic Network (SCSN) handles more than 2500 high-data-rate channels from more than 380 seismic stations distributed across southern California. These data are imported in real time from dataloggers, earthworm hubs, and partner networks. The SCSN also exports data to eight different partner networks. Both the imported and exported data are critical for emergency response and scientific research. Previous data acquisition systems were complex and difficult to operate because they grew in an ad hoc fashion to meet the increasing needs for distributing real-time waveform data. To maximize reliability and redundancy, we apply best-practice methods from computer science for implementing the software and hardware configurations for import, export, and acquisition of real-time seismic data. Our approach makes use of failover software designs, methods for dividing labor diligently amongst the network nodes, and state-of-the-art networking redundancy technologies. To facilitate maintenance and daily operations we seek to provide some separation between major functions such as data import, export, acquisition, archiving, real-time processing, and alarming. As an example, we make waveform import and export functions independent by operating them on separate servers. Similarly, two independent servers provide waveform export, allowing data recipients to implement their own redundancy. The data import is handled differently, using one primary server and a live backup server. These data import servers run fail-over software that allows automatic switching of roles from primary to shadow in case of failure. Similar to the classic earthworm design, all the acquired waveform data are broadcast onto a private network, which allows multiple machines to acquire and process the data. As we separate data import and export from acquisition, we are also working on new approaches to separate real-time processing and rapid reliable archiving of real-time data
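
    The primary/shadow role switching described above can be sketched as a heartbeat timeout check: the shadow promotes itself when the primary's heartbeat goes stale. This is a simplified illustration, not the SCSN fail-over software:

```python
import time

class FailoverPair:
    """Shadow promotes itself when the primary's heartbeat goes stale."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()
        self.active = "primary"

    def heartbeat(self):
        """Called by the primary import server while it is healthy."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Called periodically on the shadow; switches roles if the primary is silent."""
        if time.monotonic() - self.last_heartbeat > self.timeout:
            self.active = "shadow"
        return self.active

pair = FailoverPair(timeout=0.05)
print(pair.check())        # "primary" while heartbeats are fresh
time.sleep(0.1)            # the primary stops sending heartbeats
print(pair.check())        # "shadow" has taken over
```

    Real deployments would add hysteresis and a fencing mechanism so a recovering primary cannot fight the shadow for the active role.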

  4. Social Network Mixing Patterns In Mergers & Acquisitions - A Simulation Experiment

    Directory of Open Access Journals (Sweden)

    Robert Fabac

    2011-01-01

    In the contemporary world of global business and continuously growing competition, organizations tend to use mergers and acquisitions to enforce their position on the market. The future organization’s design is a critical success factor in such undertakings. The field of social network analysis can enhance our understanding of these processes, as it lets us reason about the development of networks regardless of their origin. The analysis of mixing patterns is particularly useful, as it provides an insight into how nodes in a network connect with each other. We hypothesize that organizational networks with compatible mixing patterns will be integrated more successfully. After conducting a simulation experiment, we suggest an integration model based on the analysis of network assortativity. The model can be a guideline for organizational integration, such as occurs in mergers and acquisitions.
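
    Degree assortativity, the measure underlying the mixing-pattern analysis above, is the Pearson correlation between the degrees found at the two endpoints of each edge. A pure-Python sketch on a hypothetical graph:

```python
import math

def degree_assortativity(edges):
    """Pearson correlation between the degrees at the two endpoints of each edge."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # each undirected edge contributes both (deg_u, deg_v) and (deg_v, deg_u)
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

star = [(0, i) for i in range(1, 5)]   # hub-and-spoke: hub links only to leaves
print(degree_assortativity(star))      # -1.0: perfectly disassortative
```

    A strongly hierarchical organization (hub-and-spoke) is disassortative, while a peer network of similar-degree nodes is assortative; comparing the two coefficients is one way to judge whether merging networks have compatible mixing patterns.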

  5. Computer-communication networks

    CERN Document Server

    Meditch, James S

    1983-01-01

    Computer-Communication Networks presents a collection of articles focused on modeling, analysis, design, and performance optimization. It discusses the problem of modeling the performance of local area networks under file transfer. It addresses the design of multi-hop, mobile-user radio networks. Some of the topics covered in the book are the distributed packet switching queuing network design, some investigations on communication switching techniques in computer networks, and the minimum hop flow assignment and routing subject to an average message delay constraint

  6. COMPUTER-AIDED ACQUISITION OF WRITING SKILLS

    NARCIS (Netherlands)

    Verhoef, R.; Tomic, W.

    2008-01-01

    This article presents the results of a review of the literature questioning whether and to what extent computers can be used as a means of instruction for the guided acquisition of communicative writing skills in higher education. To answer this question, the present paper first explores the

  7. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  8. Computer-based supervisory control and data acquisition system for the radioactive waste evaporator

    International Nuclear Information System (INIS)

    Pope, N.G.; Schreiber, S.B.; Yarbro, S.L.; Gomez, B.G.; Nekimken, H.L.; Sanchez, D.E.; Bibeau, R.A.; Macdonald, J.M.

    1994-12-01

    The evaporator process at TA-55 reduces the amount of transuranic liquid radioactive waste by separating radioactive salts from relatively low-level radioactive nitric acid solution. A computer-based supervisory control and data acquisition (SCADA) system has been installed on the process that allows the operators to easily interface with process equipment. Individual single-loop controllers in the SCADA system allow more precise process operation with less human intervention. With this system, process data can be archived in computer files for later analysis. Data are distributed throughout the TA-55 site through a local area network so that real-time process conditions can be monitored at multiple locations. The entire system has been built using commercially available hardware and software components
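
    Single-loop controllers like those mentioned above commonly implement a PI control law: the output is proportional to the current error plus the accumulated error. A minimal discrete-time PI sketch with illustrative gains and a toy plant model (not the TA-55 implementation):

```python
def pi_controller(setpoint, kp, ki, dt):
    """Discrete PI loop: returns a function mapping a measurement to an actuator output."""
    integral = 0.0
    def step(measured):
        nonlocal integral
        error = setpoint - measured
        integral += error * dt           # accumulated error removes steady-state offset
        return kp * error + ki * integral
    return step

# toy first-order process: level rises with the actuator output and decays on its own
level = 0.0
control = pi_controller(setpoint=10.0, kp=0.8, ki=0.4, dt=0.1)
for _ in range(400):
    u = control(level)
    level += (u - 0.5 * level) * 0.1     # simple plant model, time step 0.1
print(round(level, 2))                   # settles near the 10.0 setpoint
```

    The integral term is what lets a single-loop controller hold the process at the setpoint without an operator trimming the output by hand.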

  9. Data acquisition in modeling using neural networks and decision trees

    Directory of Open Access Journals (Sweden)

    R. Sika

    2011-04-01

    Full Text Available The paper presents a comparison of selected models from the area of artificial neural networks and decision trees in relation to actual conditions of foundry processes. The work contains short descriptions of the algorithms used, their purpose, and the method of data preparation, which is the domain of Data Mining systems. The first part concerns data acquisition carried out in a selected iron foundry, indicating problems to solve in the context of casting process modelling. The second part is a comparison of selected algorithms: a decision tree and an artificial neural network, namely CART (Classification And Regression Trees) and BP (Backpropagation) in MLP (Multilayer Perceptron) networks. The aim of the paper is to show an aspect of selecting data for modelling, cleaning it and reducing it, for example due to too strong a correlation between some of the recorded process parameters. It has also been shown what results can be obtained using two different approaches: first, modelling using available commercial software, for example Statistica; second, modelling step by step in an Excel spreadsheet based on the same algorithm, such as BP-MLP. The discrepancy between the results obtained from these two approaches originates from a priori assumptions. The aforementioned universal software package Statistica, when used without awareness of the relations between technological parameters, i.e. without the user having experience in foundry practice and without ranking particular parameters on the basis of the acquisition, cannot give a credible basis to predict the quality of the castings. A decisive influence of the data acquisition method has also been clearly indicated; the acquisition should be conducted according to repeatable measurement and control procedures. This paper is based on about 250 records of actual data, for one assortment over a 6-month period, of which only 12 data sets were complete (including two that were used for validation of the neural network) and useful for creating a model. It is definitely too
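The BP-MLP algorithm that the abstract contrasts with commercial packages can be stepped through in a few lines. Below is a minimal, purely illustrative sketch (not the authors' spreadsheet model): one sigmoid hidden layer, a linear output, and plain stochastic gradient descent; all names and parameter values are assumptions.

```python
import math
import random

def train_mlp(samples, hidden=4, lr=0.5, epochs=2000, seed=1):
    """Minimal BP-MLP: one sigmoid hidden layer, one linear output,
    trained by plain gradient descent; the same arithmetic one could
    reproduce cell by cell in a spreadsheet."""
    rnd = random.Random(seed)
    n_in = len(samples[0][0])
    # each weight row carries a trailing bias weight
    w1 = [[rnd.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(hidden)]
    w2 = [rnd.uniform(-1, 1) for _ in range(hidden + 1)]
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    for _ in range(epochs):
        for x, t in samples:
            xb = x + [1.0]
            h = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1]
            y = sum(w * v for w, v in zip(w2, h + [1.0]))
            err = y - t  # dE/dy for squared error E = (y - t)^2 / 2
            for j in range(hidden):
                # backpropagated error through the sigmoid derivative
                delta = err * w2[j] * h[j] * (1.0 - h[j])
                w2[j] -= lr * err * h[j]
                for i in range(n_in + 1):
                    w1[j][i] -= lr * delta * xb[i]
            w2[hidden] -= lr * err  # output bias
    def predict(x):
        h = [sig(sum(w * v for w, v in zip(row, x + [1.0]))) for row in w1]
        return sum(w * v for w, v in zip(w2, h + [1.0]))
    return predict
```

On a toy one-input regression the network converges quickly, which is what makes the step-by-step spreadsheet comparison in the paper feasible.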

  10. A Distributed Data Acquisition System for the Sensor Network of the TAWARA_RTM Project

    Science.gov (United States)

    Fontana, Cristiano Lino; Donati, Massimiliano; Cester, Davide; Fanucci, Luca; Iovene, Alessandro; Swiderski, Lukasz; Moretto, Sandra; Moszynski, Marek; Olejnik, Anna; Ruiu, Alessio; Stevanato, Luca; Batsch, Tadeusz; Tintori, Carlo; Lunardon, Marcello

    This paper describes a distributed Data Acquisition System (DAQ) developed for the TAWARA_RTM project (TAp WAter RAdioactivity Real Time Monitor). The aim is to detect the presence of radioactive contaminants in drinking water in order to prevent deliberate or accidental threats. Employing a set of detectors, it is possible to detect alpha, beta and gamma radiation from emitters dissolved in the water. The Sensor Network (SN) consists of several heterogeneous nodes controlled by a centralized server. The cyber-security of the SN is guaranteed in order to protect it from external intrusions and malicious acts. The nodes were installed at different locations along the water treatment process in the waterworks plant supplying the aqueduct of Warsaw, Poland. Embedded computers control the simpler nodes and are directly connected to the SN. Local PCs (LPCs) control the more complex nodes, which consist of signal digitizers acquiring data from several detectors. The DAQ in the LPC is split into several processes communicating with sockets in a local sub-network. Each process is dedicated to a very simple task (e.g. data acquisition, data analysis, hydraulics management) in order to have a flexible and fault-tolerant system. The main SN and the local DAQ networks are separated by data routers to ensure cyber-security.
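The split of the local DAQ into single-task processes talking over sockets can be illustrated with length-prefixed message framing, a common pattern for small control messages between cooperating processes. This is a generic sketch, not the TAWARA_RTM wire format; the framing and field names are assumptions.

```python
import json
import socket
import struct

def send_msg(sock, obj):
    """Send one JSON message prefixed with its byte length (big-endian
    uint32), so the receiver knows where each message ends on the
    stream socket."""
    data = json.dumps(obj).encode()
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_msg(sock):
    """Read exactly one length-prefixed JSON message."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode())

def _recv_exact(sock, n):
    # recv() may return fewer bytes than requested; loop until done.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf
```

A pair of such helpers is enough for one process (e.g. hydraulics management) to hand a task to another (e.g. data analysis) over the local sub-network.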

  11. A data acquisition computer for high energy physics applications DAFNE:- hardware manual

    International Nuclear Information System (INIS)

    Barlow, J.; Seller, P.; De-An, W.

    1983-07-01

    A high performance stand-alone computer system based on the Motorola 68000 microprocessor has been built at the Rutherford Appleton Laboratory. Although the design was strongly influenced by the requirement to provide a compact data acquisition computer for the high energy physics environment, the system is sufficiently general to find applications in a wider area. It provides colour graphics and tape and disc storage, together with access to CAMAC systems. This report is the hardware manual of the data acquisition computer, DAFNE (Data Acquisition For Nuclear Experiments), and as such contains a full description of the hardware structure of the computer system. (author)

  12. Classroom Computer Network.

    Science.gov (United States)

    Lent, John

    1984-01-01

    This article describes a computer network system that connects several microcomputers to a single disk drive and one copy of software. Many schools are switching to networks as a cheaper and more efficient means of computer instruction. Teachers may be faced with copyright problems when reproducing programs. (DF)

  13. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network, forming a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines were in the operating network. The computer system moves the network connections used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, thereby protecting the group of the virtual machines from actions performed by the adversary.

  14. Computing networks from cluster to cloud computing

    CERN Document Server

    Vicat-Blanc, Pascale; Guillier, Romaric; Soudan, Sebastien

    2013-01-01

    "Computing Networks" explores the core of the new distributed computing infrastructures we are using today: the networking systems of clusters, grids and clouds. It helps network designers and distributed-application developers and users to better understand the technologies, specificities, constraints and benefits of these different infrastructures' communication systems. Cloud computing will give millions of users the possibility to process data anytime, anywhere, while being eco-friendly. In order to deliver this emerging traffic in a timely, cost-efficient, energy-efficient, and

  15. A Lossless Switch for Data Acquisition Networks

    CERN Document Server

    Jereczek, Grzegorz Edmund; The ATLAS collaboration

    2015-01-01

    The recent trends in software-defined networking (SDN) and network function virtualization (NFV) are boosting the advance of software-based packet processing and forwarding on commodity servers. Although performance has traditionally been the challenge of this approach, this situation changes with modern server platforms. High performance load balancers, proxies, virtual switches and other network functions can be now implemented in software and not limited to specialized commercial hardware, thus reducing cost and increasing the flexibility. In this paper we design a lossless software-based switch for high bandwidth data acquisition (DAQ) networks, using the ATLAS experiment at CERN as a case study. We prove that it can effectively solve the incast pathology arising from the many-to-one communication pattern present in DAQ networks by providing extremely high buffering capabilities. We evaluate this on a commodity server equipped with twelve 10 Gbps Ethernet interfaces providing a total bandwidth of 120 Gbps...
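The incast pathology and the effect of deep buffering described above can be reproduced with a toy model: many senders burst simultaneously into one output queue that drains at a fixed link rate. The function and its parameters below are hypothetical, not the paper's benchmark.

```python
from collections import deque

def simulate_incast(n_senders, burst, buffer_pkts, drain_per_tick):
    """Toy many-to-one (incast) model: every sender bursts `burst`
    packets into a single output queue in the same instant; packets
    beyond `buffer_pkts` are dropped, and the link drains
    `drain_per_tick` packets per tick. Returns (delivered, dropped)."""
    queue = deque()
    dropped = 0
    for s in range(n_senders):
        for p in range(burst):
            if len(queue) < buffer_pkts:
                queue.append((s, p))
            else:
                dropped += 1  # buffer exhausted: the incast loss
    delivered = 0
    while queue:
        for _ in range(min(drain_per_tick, len(queue))):
            queue.popleft()
            delivered += 1
    return delivered, dropped
```

With a shallow, hardware-like buffer most of the burst is lost, while a DRAM-sized buffer absorbs the whole burst, which is the core argument for software switching in DAQ networks.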

  16. Computer networks and advanced communications

    International Nuclear Information System (INIS)

    Koederitz, W.L.; Macon, B.S.

    1992-01-01

    One of the major methods for getting the most productivity and benefits from computer usage is networking. However, for those who are contemplating a change from stand-alone computers to a network system, the investigation of actual networks in use presents a paradox: network systems can be highly productive and beneficial; at the same time, these networks can create many complex, frustrating problems. The issue becomes a question of whether the benefits of networking are worth the extra effort and cost. In response to this issue, the authors review in this paper the implementation and management of an actual network in the LSU Petroleum Engineering Department. The network, which has been in operation for four years, is large and diverse (50 computers, 2 sites, PCs, UNIX RISC workstations, etc.). The benefits, costs, and method of operation of this network will be described, and an effort will be made to objectively weigh these elements from the point of view of the average computer user

  17. COMPUTER-AIDED DATA ACQUISITION FOR COMBUSTION EXPERIMENTS

    Science.gov (United States)

    The article describes the use of computer-aided data acquisition techniques to aid the research program of the Combustion Research Branch (CRB) of the U.S. EPA's Air and Energy Engineering Research Laboratory (AEERL) in Research Triangle Park, NC, in particular on CRB's bench-sca...

  18. Acquisition of gamma camera and physiological data by computer

    International Nuclear Information System (INIS)

    Hack, S.N.; Chang, M.; Line, B.R.; Cooper, J.A.; Robeson, G.H.

    1986-01-01

    We have designed, implemented, and tested a new Research Data Acquisition System (RDAS) that permits a general purpose digital computer to acquire signals from both gamma camera sources and physiological signal sources concurrently. This system overcomes the limited multi-source, high speed data acquisition capabilities found in most clinically oriented nuclear medicine computers. The RDAS can simultaneously input signals from up to four gamma camera sources with a throughput of 200 kHz per source and from up to eight physiological signal sources with an aggregate throughput of 50 kHz. Rigorous testing has found the RDAS to exhibit acceptable linearity and timing characteristics. In addition, flood images obtained by this system were compared with flood images acquired by a commercial nuclear medicine computer system. National Electrical Manufacturers Association performance standards of the flood images were found to be comparable

  19. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic in computer networks can be described as a complicated system. These systems show non-linear features, and their behaviour is also difficult to simulate. Before deploying network equipment, users want to know the capability of their computer network. They do not want the servers to be overloaded during temporary traffic peaks, when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the network being planned. The paper presents the setting up of a non-linear simulation model that helps us to observe dataflow problems in networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which was defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates well the main behaviours and critical parameters of the network. Based on this study, we propose to develop a new algorithm which experimentally determines and predicts the available parameters of the modelled network.
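The bottleneck definition used in this abstract (link capacity minus competing traffic, on the link that limits end-to-end throughput) translates directly into code. A sketch under that definition, with hypothetical function names and numbers:

```python
def available_bandwidth(capacity, cross_traffic):
    """Residual bandwidth of one link: capacity minus competing
    traffic, floored at zero (a saturated link yields nothing)."""
    return max(0.0, capacity - cross_traffic)

def path_throughput(links):
    """End-to-end throughput is limited by the tight link: the minimum
    residual bandwidth along the path. `links` is a list of
    (capacity, cross_traffic) pairs in consistent units (e.g. Mbps)."""
    return min(available_bandwidth(c, x) for c, x in links)
```

For example, on a path of links with 60, 50 and 90 units of residual bandwidth, the middle link is the bottleneck and end-to-end throughput is 50.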

  20. Triggering and data acquisition general considerations

    International Nuclear Information System (INIS)

    Butler, Joel N.

    2003-01-01

    We provide a general introduction to trigger and data acquisition systems in High Energy Physics. We emphasize the new possibilities and new approaches that have been made possible by developments in computer technology and networking

  1. Network Restoration for Next-Generation Communication and Computing Networks

    Directory of Open Access Journals (Sweden)

    B. S. Awoyemi

    2018-01-01

    Full Text Available Network failures are undesirable but inevitable occurrences for most modern communication and computing networks. A good network design must be robust enough to handle sudden failures, maintain traffic flow, and restore failed parts of the network within a permissible time frame, at the lowest cost achievable and with as little extra complexity in the network as possible. Emerging next-generation (xG communication and computing networks such as fifth-generation networks, software-defined networks, and internet-of-things networks have promises of fast speeds, impressive data rates, and remarkable reliability. To achieve these promises, these complex and dynamic xG networks must be built with low failure possibilities, high network restoration capacity, and quick failure recovery capabilities. Hence, improved network restoration models have to be developed and incorporated in their design. In this paper, a comprehensive study on network restoration mechanisms that are being developed for addressing network failures in current and emerging xG networks is carried out. Open-ended problems are identified, while invaluable ideas for better adaptation of network restoration to evolving xG communication and computing paradigms are discussed.

  2. An SDN based approach for the ATLAS data acquisition network

    CERN Document Server

    Blikra, Espen; The ATLAS collaboration

    2016-01-01

    ATLAS is a high energy physics experiment at the Large Hadron Collider located at CERN. During the so-called Long Shutdown 2 period scheduled for late 2019, ATLAS will undergo several modifications and upgrades of its data acquisition system in order to cope with the higher luminosity requirements. As part of these activities, a new read-out chain will be built for the New Small Wheel muon detector, and that of the Liquid Argon calorimeter will be upgraded. The subdetector-specific electronic boards will be replaced with new commodity-server-based systems, and instead of the custom serial-link-based communication, the new system will make use of a yet-to-be-chosen commercial network technology. The new network will be used as a data acquisition network and at the same time is intended to allow communication for the control, calibration and monitoring of the subdetectors. Therefore several types of traffic with different bandwidth requirements and different criticality will be competing for the same underl...

  3. Learning theories in computer-assisted foreign language acquisition

    OpenAIRE

    Baeva, D.

    2013-01-01

    This paper reviews learning theories, focusing on the strong interest in technology use for language learning. It is important to look at how technology has been used in the field thus far. The goals of this review are to understand how computers have been used in the past years to support foreign language learning, and to explore any research evidence with regard to how computer technology can enhance language skills acquisition

  4. Network on chip master control board for neutron acquisition

    International Nuclear Information System (INIS)

    Ruiz-Martinez, E.; Mary, T.; Mutti, P.; Ratel, J.; Rey, F.

    2012-01-01

    The acquisition master control board is designed to bring together the various acquisition modes in use at the Institut Laue-Langevin (ILL). The main goal is to make the card common to all of the ILL's instruments in a simple, modular and open way, giving the possibility to add new functionalities in order to follow evolving demand. It has been necessary to define a central element to provide synchronization to the rest of the units. The backbone of the proposed acquisition control system is the so-called master acquisition board. The master board consists of a VME64X configurable high density I/O connection carrier board based on the latest Xilinx Virtex-6T FPGA. The internal architecture of the FPGA follows a Network on Chip (NoC) approach. The complete system also includes a display board and n histogram modules for live display of the data from the detectors. (authors)

  5. Computing in an academic radiation therapy department

    International Nuclear Information System (INIS)

    Gottlieb, C.F.; Houdek, P.V.; Fayos, J.V.

    1985-01-01

    The authors conceptualized the different computer functions in radiotherapy as follows: 1) treatment planning and dosimetry, 2) data and word processing, 3) radiotherapy information system (data bank), 4) statistical analysis, 5) data acquisition and equipment control, 6) telecommunication, and 7) financial management. They successfully implemented the concept of distributed computing using multiple mini and personal computers. The authors' computer practice supports data and word processing, graphics, communication, automated data acquisition and control, and portable computing. The computers are linked together into a local computer network which permits sharing of information, peripherals, and unique programs among our systems, while preserving the individual function and identity of each machine. Furthermore, the architecture of our network allows direct access to any other computer network providing them with inexpensive use of the most modern and sophisticated software and hardware resources

  6. Parallel computing and networking; Heiretsu keisanki to network

    Energy Technology Data Exchange (ETDEWEB)

    Asakawa, E; Tsuru, T [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T [Japan Petroleum Exploration Co. Ltd., Tokyo (Japan)

    1996-05-01

    This paper describes the trend of parallel computers used in geophysical exploration. Around 1993 were the early days when parallel computers began to be used for geophysical exploration. At that time these computers were classified mainly as MIMD (multiple instruction stream, multiple data stream), SIMD (single instruction stream, multiple data stream) and the like. Parallel computers were publicized in the 1994 meeting of the Geophysical Exploration Society as a `high precision imaging technology`. Concerning the libraries for parallel computers, there was a shift to PVM (parallel virtual machine) in 1993 and to MPI (message passing interface) in 1995. In addition, the FORTRAN90 compiler was released with support implemented for data parallel and vector computers. In 1993, the networks used were Ethernet, FDDI, CDDI and HIPPI. In 1995, OC-3 products based on ATM began to propagate. However, ATM remains an interoffice high speed network because the ATM service has not yet spread to the public network. 1 ref.

  7. Low Computational Complexity Network Coding For Mobile Networks

    DEFF Research Database (Denmark)

    Heide, Janus

    2012-01-01

    Network Coding (NC) is a technique that can provide benefits in many types of networks, some examples from wireless networks are: In relay networks, either the physical or the data link layer, to reduce the number of transmissions. In reliable multicast, to reduce the amount of signaling and enable......-flow coding technique. One of the key challenges of this technique is its inherent computational complexity which can lead to high computational load and energy consumption in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several...
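As a concrete, simplified illustration of the computational cost discussed above, a random linear network code over GF(2) reduces encoding to XOR combinations and decoding to Gaussian elimination. The sketch below is illustrative only, not the thesis's optimized coding library; all names are assumptions.

```python
import random

def rlnc_encode(packets, n_coded, seed=0):
    """Random linear network coding over GF(2): each coded packet is
    the XOR of a random non-empty subset of the source packets; the
    coefficient vector travels with the payload (here packets are
    modelled as plain integers)."""
    rnd = random.Random(seed)
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rnd.randint(0, 1) for _ in range(k)]
        if not any(coeffs):  # avoid the useless all-zero combination
            coeffs[rnd.randrange(k)] = 1
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Recover the k source packets by Gaussian elimination over GF(2).
    Raises ValueError if the received combinations do not reach rank k."""
    rows = [[c[:], p] for c, p in coded]
    basis = []
    for col in range(k):
        pivot = next((r for r in rows if r[0][col]), None)
        if pivot is None:
            raise ValueError("rank deficient: need more coded packets")
        rows = [r for r in rows if r is not pivot]
        # clear this column from every other row (forward + back subst.)
        for r in rows + basis:
            if r[0][col]:
                r[0] = [a ^ b for a, b in zip(r[0], pivot[0])]
                r[1] ^= pivot[1]
        basis.append(pivot)
    # each basis row now has a single 1, at its pivot column
    out = [None] * k
    for coeffs, payload in basis:
        out[coeffs.index(1)] = payload
    return out
```

Even in this naive form the cost structure is visible: encoding is linear in the number of packets, while decoding carries the cubic cost of elimination, which is exactly the computational load the thesis targets on mobile platforms.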

  8. computer networks

    Directory of Open Access Journals (Sweden)

    N. U. Ahmed

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. We also propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
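The token bucket algorithm being modelled here has a compact operational form. Below is a generic policer sketch (not the paper's dynamic-system model); the class name and the injectable clock are assumptions made for testability.

```python
import time

class TokenBucket:
    """Token bucket policer: tokens accrue at `rate` per second up to
    `capacity`; a packet of `size` tokens conforms only if enough
    tokens are currently in the bucket."""
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full
        self.now = now          # injectable clock for deterministic tests
        self.last = now()

    def allow(self, size=1.0):
        t = self.now()
        # Refill proportionally to elapsed time, clamped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= size:
            self.tokens -= size
            return True   # conforming packet
        return False      # non-conforming: drop or mark
```

The capacity bounds the burst size a source can emit at once, while the rate bounds its long-term average, which is the policing function the multiplexor model in the paper builds on.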

  9. Basics of Computer Networking

    CERN Document Server

    Robertazzi, Thomas

    2012-01-01

    The Springer Brief Basics of Computer Networking provides a non-mathematical introduction to the world of networks. This book covers technology for both wired and wireless networks. Coverage includes transmission media, local area networks, wide area networks, and network security. It is written in a very accessible style for the interested layman by the author of a widely used textbook, with many years of experience explaining concepts to the beginner.

  10. Real-time multiple networked viewer capability of the DIII-D EC data acquisition system

    International Nuclear Information System (INIS)

    Ponce, D.; Gorelov, I.A.; Chiu, H.K.; Baity, F.W.

    2005-01-01

    A data acquisition system (DAS) which permits real-time viewing by multiple locally networked operators is being implemented for the electron cyclotron (EC) heating and current drive system at DIII-D. The DAS is expected to demonstrate performance equivalent to standalone oscilloscopes. Participation by remote viewers, including throughout the greater DIII-D facility, can also be incorporated. The real-time system uses one computer-controlled DAS per gyrotron. The DAS computers send their data to a central data server using individual and dedicated 200 Mbps fully duplexed Ethernet connections. The server has a dedicated 10 krpm hard drive for each gyrotron DAS. Selected channels can then be reprocessed and distributed to viewers over a standard local area network (LAN). They can also be bridged from the LAN to the internet. Calculations indicate that the hardware will support real-time writing of each channel at full resolution to the server hard drives. The data will be re-sampled for distribution to multiple viewers over the LAN in real-time. The hardware for this system is in place. The software is under development. This paper will present the design details and up-to-date performance metrics of the system

  11. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  12. 1990 CERN School of Computing

    International Nuclear Information System (INIS)

    1991-01-01

    These Proceedings contain written versions of lectures delivered at the 1990 CERN School of Computing, covering a variety of topics. Computer networks are treated in three papers: standards in computer networking; evolution of local and metropolitan area networks; asynchronous transfer mode, the solution for broadband ISDN. Data acquisition and analysis are the topic of papers on: data acquisition using MODEL software; graphical event analysis. Two papers in the field of signal processing treat digital image processing and the use of digital signal processors in HEP. Another paper reviews the present state of digital optical computing. Operating systems and programming discipline are covered in two papers: UNIX, evolution towards distributed systems; new developments in program verification. Three papers treat miscellaneous topics: computer security within the CERN environment; numerical simulation in fluid mechanics; fractals. An introduction to transputers and Occam gives an account of the tutorial lectures given at the School. (orig.)

  13. High speed network for real-time data acquisition and control with lossless and balanced self-routing

    International Nuclear Information System (INIS)

    Ofek, Y.; Hicks, S.E.

    1990-01-01

    The authors present a gigabit network architecture for interconnecting the on-line and off-line data acquisition and processing farms for future detector facilities at the SSC (Superconducting Super Collider). In their solution they concentrate on how to interconnect the front-end (ADCs and level 1 and 2 triggers) with the on-line (real-time) computer farm. They propose that the on-line solution be extended into off-line data processing and mass storage. As a result, this solution provides a single uniform structure which will simplify the overall system

  14. Distributed processing and network of data acquisition and diagnostics control for Large Helical Device (LHD)

    International Nuclear Information System (INIS)

    Nakanishi, H.; Kojima, M.; Hidekuma, S.

    1997-11-01

    The LHD (Large Helical Device) data processing system has been designed to deal with the huge amount of diagnostics data, 600-900 MB per 10-second short-pulse experiment, in preparation for the first plasma experiment in March 1998. The recent increase in data volume made it necessary to adopt a fully distributed system structure which uses multiple data transfer paths in parallel and separates all of the computer functions into clients and servers. The fundamental element installed for every diagnostic device consists of two kinds of server computers: the data acquisition PC/Windows NT and the real-time diagnostics control VME/VxWorks. To cope with diversified kinds of both device control channels and diagnostics data, object-oriented methods are utilized throughout the development of this system. This not only reduces the development burden, but also widens software portability and flexibility. 100 Mbps FDDI-based fast networks will re-integrate the distributed server computers so that they can behave as one virtual macro-machine for users. The network methods applied to the LHD data processing system are based entirely on TCP/IP internet technology, providing remote collaborators the same accessibility as local participants. (author)

  15. Autonomous acquisition systems for TJ-II: controlling instrumentation with a fourth generation language

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A.B.; Vega, J.; Agudo, J.M.; McCarthy, K.J.; Ruiz, M.; Barrera, E.; Lopez, S.

    2004-01-01

    Recently, 536 new acquisition channels, made up of three different channel types, have been incorporated into the TJ-II data acquisition system (DAQ). Dedicated software has also been developed to permit experimentalists to program and control data acquisition in these systems. The software has been developed using LabView and runs under the Windows 2000 operating system on both personal computer (PC) and PXI controllers. In addition, LabView software has been developed to control TJ-II VXI channels from a PC using a MXI connection. This new software environment will also aid future integration of acquisition channels into the TJ-II remote participation system. All of these acquisition devices work autonomously and are connected to the TJ-II central server via a local area network. In addition, they can be remotely controlled from the TJ-II control room using Virtual Network Computing (VNC) software

  16. CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak

    International Nuclear Information System (INIS)

    VanderLaan, J.F.; Cummings, J.W.

    1993-10-01

    The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPUs. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software will be chronicled, including observation of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and computer I/O speeds are expected to also increase data rates

  17. CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak

    Science.gov (United States)

    Vanderlaan, J. F.; Cummings, J. W.

    1993-10-01

    The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPUs. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software will be chronicled, including observation of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and computer I/O speeds are expected to also increase data rates.

  18. Computer Networking Laboratory for Undergraduate Computer Technology Program

    National Research Council Canada - National Science Library

    Naghedolfeizi, Masoud

    2000-01-01

    ...) To improve the quality of education in the existing courses related to computer networks and data communications as well as other computer science courses such programming languages and computer...

  19. Location-aware network operation for cloud radio access network

    KAUST Repository

    Wang, Fanggang

    2017-06-20

    One of the major challenges in effectively operating a cloud radio access network (C-RAN) is the excessive overhead signaling and computation load that scale rapidly with the size of the network. In this paper, the exploitation of location information of the mobile devices is proposed to address this challenge. We consider an approach in which location-assisted channel state information (CSI) acquisition methods are introduced to complement conventional pilot-based CSI acquisition methods and avoid excessive overhead signaling. A low-complexity algorithm is designed to maximize the sum rate. An adaptive algorithm is also proposed to address the uncertainty issue in CSI acquisition. Both theoretical and numerical analyses show that location information provides a new dimension to improve throughput for next-generation massive cooperative networks.

  20. New Communication Network Protocol for a Data Acquisition System

    Science.gov (United States)

    Uchida, T.; Fujii, H.; Nagasaka, Y.; Tanaka, M.

    2006-02-01

    An event builder based on communication networks has been used in high-energy physics experiments, and various networks have been adopted, for example, IEEE 802.3 (Ethernet), asynchronous transfer mode (ATM), and so on. In particular, Ethernet is widely used because its infrastructure is very cost effective. Many systems adopt standard protocols that are designed for a general network. However, in the case of an event builder, the communication pattern between stations is different from that in a general network. The unique communication pattern causes congestion and thus makes it difficult to quantitatively design the network. To solve this problem, we have developed a simple network protocol for a data acquisition (DAQ) system. The protocol is designed to keep the sequence of senders so that no congestion occurs. We implemented the protocol on a small hardware component [a field programmable gate array (FPGA)] and measured the performance, so that it will be ready for a generic DAQ system
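The idea of keeping the sequence of senders can be illustrated by generating the transmission order explicitly: for each event, a token visits the senders one after another, so the receiving link never carries more than one transmission at a time. The function below is a hypothetical illustration, not the FPGA protocol itself.

```python
def sender_schedule(n_senders, n_events):
    """Sequenced-sender order for an event builder: within each event,
    senders transmit their fragment strictly one after another, so the
    many-to-one congestion of simultaneous senders cannot occur at the
    receiver. Returns the ordered list of (event, sender) slots."""
    schedule = []
    for event in range(n_events):
        for sender in range(n_senders):
            schedule.append((event, sender))
    return schedule
```

Because exactly one (event, sender) slot is active at any position in the schedule, the receiver's input queue holds at most one in-flight fragment, which is what lets such a network be dimensioned quantitatively.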

  1. Data Management Standards in Computer-aided Acquisition and Logistic Support (CALS)

    Science.gov (United States)

    Jefferson, David K.

    1990-01-01

    Viewgraphs and discussion on data management standards in computer-aided acquisition and logistic support (CALS) are presented. CALS is intended to reduce cost, increase quality, and improve timeliness of weapon system acquisition and support by greatly improving the flow of technical information. The phase 2 standards, industrial environment, are discussed. The information resource dictionary system (IRDS) is described.

  2. Pragmatic Bootstrapping: A Neural Network Model of Vocabulary Acquisition

    Science.gov (United States)

    Caza, Gregory A.; Knott, Alistair

    2012-01-01

    The social-pragmatic theory of language acquisition proposes that children only become efficient at learning the meanings of words once they acquire the ability to understand the intentions of other agents, in particular the intention to communicate (Akhtar & Tomasello, 2000). In this paper we present a neural network model of word learning which…

  3. Nuclear Physics computer networking: Report of the Nuclear Physics Panel on Computer Networking

    International Nuclear Information System (INIS)

    Bemis, C.; Erskine, J.; Franey, M.; Greiner, D.; Hoehn, M.; Kaletka, M.; LeVine, M.; Roberson, R.; Welch, L.

    1990-05-01

    This paper discusses: the state of computer networking within the nuclear physics program; network requirements for nuclear physics; management structure; and issues of special interest to the nuclear physics program office.

  4. Computed tomography: acquisition process, technology and current state

    Directory of Open Access Journals (Sweden)

    Óscar Javier Espitia Mendoza

    2016-02-01

    Computed tomography is a noninvasive scanning technique widely applied in areas such as medicine, industry, and geology. The technique allows three-dimensional reconstruction of the internal structure of an object illuminated by an X-ray source. The reconstruction is formed from two-dimensional cross-sectional images of the object. Each cross-sectional image is obtained from measurements of physical phenomena, such as attenuation, dispersion, and diffraction of X-rays, resulting from their interaction with the object. In general, measurement acquisition is performed with methods based on one of these phenomena and according to various architectures classified into generations. Furthermore, in response to the need to simulate acquisition systems for CT, software dedicated to this task has been developed. The objective of this research is to determine the current state of CT techniques; to this end, a review of methods, of the different architectures used for acquisition, and of some applications is presented. Additionally, simulation results are presented. The main contributions of this work are the detailed description of acquisition methods and the presentation of possible trends of the technique.
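    As a minimal illustration of the attenuation-based measurement the abstract mentions, the Beer-Lambert law relates the transmitted X-ray intensity along one ray to the line integral of the attenuation coefficients it traverses; the log-attenuation is the projection value a reconstruction algorithm works on. This is a textbook sketch, not taken from the paper; the numbers are invented.

```python
import math

def transmitted_intensity(i0, mus, dl):
    """Beer-Lambert law along one ray: I = I0 * exp(-sum(mu) * dl)."""
    return i0 * math.exp(-sum(mus) * dl)

def projection_value(i0, i):
    """Reconstruction works on the log-attenuation ln(I0 / I)."""
    return math.log(i0 / i)

mus = [0.1, 0.3, 0.2]   # attenuation coefficients along the ray (1/cm)
dl = 0.5                # path length through each voxel (cm)
i = transmitted_intensity(1000.0, mus, dl)
p = projection_value(1000.0, i)   # recovers the line integral sum(mu) * dl
```

    Collecting such projection values over many ray angles produces the sinogram from which each cross-sectional image is reconstructed.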

  5. Computer Network Security- The Challenges of Securing a Computer Network

    Science.gov (United States)

    Scotti, Vincent, Jr.

    2011-01-01

    This article is intended to give the reader an overall perspective on what it takes to design, implement, enforce, and secure a computer network in the federal and corporate world to ensure the confidentiality, integrity, and availability of information. While we provide an overview of network design and security, this article concentrates on the technology and human factors of securing a network and the challenges faced by those doing so. It covers the large number of policies and the limits of technology and physical efforts to enforce such policies.

  6. Extended data acquisition support at GSI

    International Nuclear Information System (INIS)

    Marinescu, D.C.; Busch, F.; Hultzsch, H.; Lowsky, J.; Richter, M.

    1984-01-01

    The Experiment Data Acquisition and Analysis System (EDAS) of GSI, designed to support the data processing associated with nuclear physics experiments, provides three modes of operation: real-time, interactive replay and batch replay. The real-time mode is used for data acquisition and data analysis during an experiment performed at the heavy ion accelerator at GSI. An experiment may be performed either in Stand Alone Mode, using only the Experiment Computers, or in Extended Mode using all computing resources available. The Extended Mode combines the advantages of the real-time response of a dedicated minicomputer with the availability of computing resources in a large computing environment. This paper first gives an overview of EDAS and presents the GSI High Speed Data Acquisition Network. Data Acquisition Modes and the Extended Mode are then introduced. The structure of the system components, their implementation and the functions pertinent to the Extended Mode are presented. The control functions of the Experiment Computer sub-system are discussed in detail. Two aspects of the design of the sub-system running on the mainframe are stressed, namely the use of a multi-user installation for real-time processing and the use of a high level programming language, PL/I, as an implementation language for a system which uses parallel processing. The experience accumulated is summarized in a number of conclusions

  7. Computer Networks as a New Data Base.

    Science.gov (United States)

    Beals, Diane E.

    1992-01-01

    Discusses the use of communication on computer networks as a data source for psychological, social, and linguistic research. Differences between computer-mediated communication and face-to-face communication are described, the Beginning Teacher Computer Network is discussed, and examples of network conversations are appended. (28 references) (LRW)

  8. Computer networking a top-down approach

    CERN Document Server

    Kurose, James

    2017-01-01

    Unique among computer networking texts, the Seventh Edition of the popular Computer Networking: A Top Down Approach builds on the author’s long tradition of teaching this complex subject through a layered approach in a “top-down manner.” The text works its way from the application layer down toward the physical layer, motivating readers by exposing them to important concepts early in their study of networking. Focusing on the Internet and the fundamentally important issues of networking, this text provides an excellent foundation for readers interested in computer science and electrical engineering, without requiring extensive knowledge of programming or mathematics. The Seventh Edition has been updated to reflect the most important and exciting recent advances in networking.

  9. Quantification of Artifact Reduction With Real-Time Cine Four-Dimensional Computed Tomography Acquisition Methods

    International Nuclear Information System (INIS)

    Langner, Ulrich W.; Keall, Paul J.

    2010-01-01

    Purpose: To quantify the magnitude and frequency of artifacts in simulated four-dimensional computed tomography (4D CT) images using three real-time acquisition methods (direction-dependent displacement acquisition, simultaneous displacement and phase acquisition, and simultaneous displacement and velocity acquisition) and to compare these methods with commonly used retrospective phase sorting. Methods and Materials: Image acquisition for the four 4D CT methods was simulated with different displacement and velocity tolerances for spheres with radii of 0.5 cm, 1.5 cm, and 2.5 cm, using 58 patient-measured tumors and respiratory motion traces. The magnitude and frequency of artifacts, CT doses, and acquisition times were computed for each method. Results: The mean artifact magnitude was 50% smaller for the three real-time methods than for retrospective phase sorting. The dose was ∼50% lower, but the acquisition time was 20% to 100% longer for the real-time methods than for retrospective phase sorting. Conclusions: Real-time acquisition methods can reduce the frequency and magnitude of artifacts in 4D CT images, as well as the imaging dose, but they increase the image acquisition time. The results suggest that direction-dependent displacement acquisition is the preferred real-time 4D CT acquisition method, because on average, the lowest dose is delivered to the patient and the acquisition time is the shortest for the resulting number and magnitude of artifacts.
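    A highly simplified sketch of the displacement-gated idea (the function name and trace format are assumptions, not the authors' algorithm): samples are accepted only while the respiratory displacement lies within a tolerance band around a target, and the breathing direction is recorded so that inhale and exhale passes can be sorted separately. Tightening the tolerance reduces artifacts at the cost of longer acquisition time, which is the trade-off the study quantifies.

```python
def acquire_slices(trace, target, tol):
    """trace: list of (displacement, inhaling) samples; returns accepted samples."""
    accepted = []
    for t, (d, inhaling) in enumerate(trace):
        if abs(d - target) <= tol:  # within the displacement tolerance band
            accepted.append((t, "inhale" if inhaling else "exhale"))
    return accepted

# toy triangular breathing trace: displacement rises on inhale, falls on exhale
trace = [(0, True), (1, True), (2, True), (3, True), (4, True),
         (3, False), (2, False), (1, False), (0, False)]
slices = acquire_slices(trace, target=2, tol=0.5)  # one inhale, one exhale sample
```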

  10. HeNCE: A Heterogeneous Network Computing Environment

    Directory of Open Access Journals (Sweden)

    Adam Beguelin

    1994-01-01

    Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
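    The directed-graph programming model can be mimicked in a few lines (an illustrative sketch only, not the HeNCE implementation; the node names are invented): a node runs once all of its predecessors have produced results, and arcs carry those results forward as the node's inputs.

```python
# Assumes the graph is acyclic; a cycle would make no node ever become ready.

def run_graph(nodes, arcs):
    """nodes: {name: fn(inputs_dict) -> value}; arcs: (src, dst) pairs."""
    deps = {n: {s for s, d in arcs if d == n} for n in nodes}
    results, done = {}, set()
    while len(done) < len(nodes):
        for n in [n for n in nodes if n not in done and deps[n] <= done]:
            results[n] = nodes[n]({d: results[d] for d in deps[n]})
            done.add(n)
    return results

nodes = {
    "load":   lambda ins: [1, 2, 3],
    "square": lambda ins: [x * x for x in ins["load"]],
    "total":  lambda ins: sum(ins["square"]),
}
arcs = [("load", "square"), ("square", "total")]
total = run_graph(nodes, arcs)["total"]  # sum of squares of the loaded list
```

    In the real system each node would be a Fortran or C subroutine dispatched to some machine in the PVM virtual machine rather than a local lambda; only the dependency-ordered scheduling is captured here.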

  11. Tracker Readout ASIC for Proton Computed Tomography Data Acquisition.

    Science.gov (United States)

    Johnson, Robert P; Dewitt, Joel; Holcomb, Cole; Macafee, Scott; Sadrozinski, Hartmut F-W; Steinberg, David

    2013-10-01

    A unique CMOS chip has been designed to serve as the front-end of the tracking detector data acquisition system of a pre-clinical prototype scanner for proton computed tomography (pCT). The scanner is to be capable of measuring one to two million proton tracks per second, so the chip must be able to digitize the data and send it out rapidly while keeping the front-end amplifiers active at all times. One chip handles 64 consecutive channels, including logic for control, calibration, triggering, buffering, and zero suppression. It outputs a formatted cluster list for each trigger, and a set of field programmable gate arrays merges those lists from many chips to build the events to be sent to the data acquisition computer. The chip design has been fabricated, and subsequent tests have demonstrated that it meets all of its performance requirements, including excellent low-noise performance.

  12. Acquisition of ICU data: concepts and demands.

    Science.gov (United States)

    Imhoff, M

    1992-12-01

    Because data overload is a problem in critical care today, it is of utmost importance to improve the acquisition, storage, integration, and presentation of medical data, which appears feasible only with the help of bedside computers. The data originate from four major sources: (1) the bedside medical devices, (2) the local area network (LAN) of the ICU, (3) the hospital information system (HIS), and (4) manual input. All sources differ markedly in the quality and quantity of data and in the demands of the interfaces between the data source and the patient database. The demands for data acquisition from bedside medical devices, the ICU LAN, and the HIS center on technical problems, such as computational power, storage capacity, real-time processing, interfacing with different devices and networks, and the unambiguous assignment of data to the individual patient. The main problem of manual data acquisition is the definition and configuration of the user interface, which must allow the inexperienced user to interact with the computer intuitively. Emphasis must be put on the construction of a pleasant, logical, and easy-to-handle graphical user interface (GUI). Short response times will require high graphical processing capacity. Moreover, high computational resources will be necessary in the future for additional interfacing devices such as speech recognition and 3D GUIs. Therefore, in an ICU environment the demands for computational power are enormous. These problems are complicated by the urgent need for friendly and easy-to-handle user interfaces. Both facts place ICU bedside computing at the vanguard of present and future workstation development, leaving no room for solutions based on traditional concepts of personal computers. (ABSTRACT TRUNCATED AT 250 WORDS)

  13. NNSA's Computing Strategy, Acquisition Plan, and Basis for Computing Time Allocation

    Energy Technology Data Exchange (ETDEWEB)

    Nikkel, D J

    2009-07-21

    This report is in response to the Omnibus Appropriations Act, 2009 (H.R. 1105; Public Law 111-8) in its funding of the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Program. This bill called for a report on ASC's plans for computing and platform acquisition strategy in support of stockpile stewardship. Computer simulation is essential to the stewardship of the nation's nuclear stockpile. Annual certification of the country's stockpile systems, Significant Finding Investigations (SFIs), and execution of Life Extension Programs (LEPs) are dependent on simulations employing the advanced ASC tools developed over the past decade plus; indeed, without these tools, certification would not be possible without a return to nuclear testing. ASC is an integrated program involving investments in computer hardware (platforms and computing centers), software environments, integrated design codes and physical models for these codes, and validation methodologies. The significant progress ASC has made in the past derives from its focus on mission and from its strategy of balancing support across the key investment areas necessary for success. All these investment areas must be sustained for ASC to adequately support current stockpile stewardship mission needs and to meet ever more difficult challenges as the weapons continue to age or undergo refurbishment. The appropriations bill called for this report to address three specific issues, which are responded to briefly here but are expanded upon in the subsequent document: (1) Identify how computing capability at each of the labs will specifically contribute to stockpile stewardship goals, and on what basis computing time will be allocated to achieve the goal of a balanced program among the labs. (2) Explain the NNSA's acquisition strategy for capacity and capability of machines at each of the labs and how it will fit within the existing budget constraints. (3

  14. The Advantages and Disadvantages of Computer Technology in Second Language Acquisition

    Science.gov (United States)

    Lai, Cheng-Chieh; Kritsonis, William Allan

    2006-01-01

    The purpose of this article is to discuss the advantages and disadvantages of computer technology and Computer Assisted Language Learning (CALL) programs for current second language learning. According to the National Clearinghouse for English Language Acquisition & Language Instruction Educational Programs' report (2002), more than nine million…

  15. Characteristics of the TRISTAN control computer network

    International Nuclear Information System (INIS)

    Kurokawa, Shinichi; Akiyama, Atsuyoshi; Katoh, Tadahiko; Kikutani, Eiji; Koiso, Haruyo; Oide, Katsunobu; Shinomoto, Manabu; Kurihara, Michio; Abe, Kenichi

    1986-01-01

    Twenty-four minicomputers forming an N-to-N token-ring network control the TRISTAN accelerator complex. The computers are linked by optical fiber cables with 10 Mbps transmission speed. The software system is based on NODAL, a multicomputer interpretive language developed at the CERN SPS. The high-level services offered to the users of the network are remote execution by the EXEC, EXEC-P and IMEX commands of NODAL and uniform file access throughout the system. The network software was designed to achieve the fast response of the EXEC command. The performance of the network is also reported. Tasks that overload the minicomputers are processed on the KEK central computers. One minicomputer in the network serves as a gateway to KEKNET, which connects the minicomputer network and the central computers. The communication with the central computers is managed within the framework of the KEK NODAL system. NODAL programs communicate with the central computers calling NODAL functions; functions for exchanging data between a data set on the central computers and a NODAL variable, submitting a batch job to the central computers, checking the status of the submitted job, etc. are prepared. (orig.)

  16. An integrated, multi-vendor distributed data acquisition system

    International Nuclear Information System (INIS)

    Butner, D.N.; Drlik, M.; Meyer, W.H.; Moller, J.M.; Preckshot, G.G.

    1988-01-01

    A distributed data acquisition system that uses various computer hardware and software is being developed to support magnetic fusion experiments at Lawrence Livermore National Laboratory (LLNL). The experimental sequence of operations is controlled by a supervisory program, which coordinates software running on Digital Equipment Corporation (DEC) VAX computers, Hewlett-Packard (HP) UNIX-based workstations, and HP BASIC desktop computers. An interprocess communication system (IPCS) allows programs to communicate with one another in a standard manner regardless of program location in the network or of operating system and hardware differences. We discuss the design and implementation of this data acquisition system with particular emphasis on the coordination model and the IPCS. 5 refs., 3 figs

  17. A network approach to large-scale experimental data acquisition and analysis

    International Nuclear Information System (INIS)

    Corbould, M.A.; How, J.A.

    1984-01-01

    The Plasma Research Laboratory at the Australian National University has developed a sophisticated and flexible data acquisition system for its LT-4 tokamak that requires very little support to construct, maintain, and expand. It is novel in that it is based on a minicomputer network and is built almost entirely of commercially available products. Benchmarks show that the network system outperforms conventional stand-alone and time-shared data systems.

  18. Spontaneous ad hoc mobile cloud computing network.

    Science.gov (United States)

    Lacuesta, Raquel; Lloret, Jaime; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing helps users and companies share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. To achieve this, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal presents good efficiency and network performance even with a high number of nodes.

  19. Design and Construction of Detector and Data Acquisition Elements for Proton Computed Tomography

    International Nuclear Information System (INIS)

    Fermi Research Alliance; Northern Illinois University

    2015-01-01

    Proton computed tomography (pCT) offers an alternative to x-ray imaging, with potential for three-dimensional imaging, reduced radiation exposure, and in-situ imaging. Northern Illinois University (NIU) is developing a second-generation proton computed tomography system with the goal of demonstrating the feasibility of three-dimensional imaging within clinically realistic imaging times. The second-generation pCT system comprises a tracking system, a calorimeter, data acquisition, a computing farm, and software algorithms. The proton beam encounters the upstream tracking detectors, the patient or phantom, the downstream tracking detectors, and a calorimeter. The schematic layout of the pCT system is shown. The data acquisition sends the proton scattering information to an offline computing farm. Major innovations of the second-generation pCT project are an increased data acquisition rate (MHz range) and the development of three-dimensional imaging algorithms. The Fermilab Particle Physics Division and the Northern Illinois Center for Accelerator and Detector Development at Northern Illinois University worked together to design and construct the tracking detectors, calorimeter, readout electronics, and detector mounting system.

  20. Microcomputer data acquisition and control.

    Science.gov (United States)

    East, T D

    1986-01-01

    In medicine and biology there are many tasks that involve routine well defined procedures. These tasks are ideal candidates for computerized data acquisition and control. As the performance of microcomputers rapidly increases and cost continues to go down the temptation to automate the laboratory becomes great. To the novice computer user the choices of hardware and software are overwhelming and sadly most of the computer sales persons are not at all familiar with real-time applications. If you want to bill your patients you have hundreds of packaged systems to choose from; however, if you want to do real-time data acquisition the choices are very limited and confusing. The purpose of this chapter is to provide the novice computer user with the basics needed to set up a real-time data acquisition system with the common microcomputers. This chapter will cover the following issues necessary to establish a real time data acquisition and control system: Analysis of the research problem: Definition of the problem; Description of data and sampling requirements; Cost/benefit analysis. Choice of Microcomputer hardware and software: Choice of microprocessor and bus structure; Choice of operating system; Choice of layered software. Digital Data Acquisition: Parallel Data Transmission; Serial Data Transmission; Hardware and software available. Analog Data Acquisition: Description of amplitude and frequency characteristics of the input signals; Sampling theorem; Specification of the analog to digital converter; Hardware and software available; Interface to the microcomputer. Microcomputer Control: Analog output; Digital output; Closed-Loop Control. Microcomputer data acquisition and control in the 21st Century--What is in the future? High speed digital medical equipment networks; Medical decision making and artificial intelligence.
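    Two of the analog-acquisition choices listed above, the sampling rate and the converter resolution, reduce to short calculations. The margin factor and the ECG bandwidth below are illustrative assumptions, not values from the chapter.

```python
def minimum_sampling_rate(f_max_hz, margin=2.5):
    """Nyquist requires fs > 2 * f_max; a practical margin above 2 is typical."""
    return margin * f_max_hz

def adc_resolution_lsb(v_range, n_bits):
    """Smallest voltage step a linear n-bit ADC can resolve over v_range."""
    return v_range / (2 ** n_bits)

# ECG example: clinically relevant content up to roughly 150 Hz
fs = minimum_sampling_rate(150.0)     # 375 Hz sampling rate
lsb = adc_resolution_lsb(10.0, 12)    # volts per count on a 10 V span
```

    Sampling below twice the highest signal frequency aliases high-frequency content into the band of interest, which is why the sampling theorem appears in the analog data acquisition checklist above.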

  1. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    Science.gov (United States)

    Schwemmer, Michael A; Fairhall, Adrienne L; Denève, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denève and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denève, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks.
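    The principle that a neuron spikes only when doing so reduces the network's output error can be caricatured with a single neuron tracking a constant signal. This is a toy sketch in the spirit of the Boerlin and Denève rule, not the paper's conductance-based model; the kernel weight, leak, and threshold are illustrative.

```python
def track(signal, w=1.0, leak=0.9):
    """Fire only when adding the kernel w reduces the error (x - x_hat)^2."""
    x_hat, spikes, estimates = 0.0, [], []
    for x in signal:
        x_hat *= leak            # the decoded estimate decays between spikes
        if x - x_hat > w / 2.0:  # a spike brings x_hat strictly closer to x
            x_hat += w
            spikes.append(1)
        else:
            spikes.append(0)
        estimates.append(x_hat)
    return spikes, estimates

spikes, est = track([1.0] * 20)  # the estimate hovers around the signal;
                                 # spike times are determined by the error,
                                 # not by an average firing rate
```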

  2. Considerations on command and response language features for a network of heterogeneous autonomous computers

    Science.gov (United States)

    Engelberg, N.; Shaw, C., III

    1984-01-01

    The design of a uniform command language to be used in a local area network of heterogeneous, autonomous nodes is considered. After examining the major characteristics of such a network, and after considering the profile of a scientist using the computers on the net as an investigative aid, a set of reasonable requirements for the command language are derived. Taking into account the possible inefficiencies in implementing a guest-layered network operating system and command language on a heterogeneous net, the authors examine command language naming, process/procedure invocation, parameter acquisition, help and response facilities, and other features found in single-node command languages, and conclude that some features may extend simply to the network case, others extend after some restrictions are imposed, and still others require modifications. In addition, it is noted that some requirements considered reasonable (user accounting reports, for example) demand further study before they can be efficiently implemented on a network of the sort described.

  3. Offline computing and networking

    International Nuclear Information System (INIS)

    Appel, J.A.; Avery, P.; Chartrand, G.

    1985-01-01

    This note summarizes the work of the Offline Computing and Networking Group. The report is divided into two sections; the first deals with the computing and networking requirements and the second with the proposed way to satisfy those requirements. In considering the requirements, we have considered two types of computing problems. The first is CPU-intensive activity such as production data analysis (reducing raw data to DST), production Monte Carlo, or engineering calculations. The second is physicist-intensive computing such as program development, hardware design, physics analysis, and detector studies. For both types of computing, we examine a variety of issues. These included a set of quantitative questions: how much CPU power (for turn-around and for through-put), how much memory, mass-storage, bandwidth, and so on. There are also very important qualitative issues: what features must be provided by the operating system, what tools are needed for program design, code management, database management, and for graphics

  4. Computer-based data acquisition system in the Large Coil Test Facility

    International Nuclear Information System (INIS)

    Gould, S.S.; Layman, L.R.; Million, D.L.

    1983-01-01

    The utilization of computers for data acquisition and control is of paramount importance on large-scale fusion experiments because they feature the ability to acquire data from a large number of sensors at various sample rates and provide for flexible data interpretation, presentation, reduction, and analysis. In the Large Coil Test Facility (LCTF) a Digital Equipment Corporation (DEC) PDP-11/60 host computer with the DEC RSX-11M operating system coordinates the activities of five DEC LSI-11/23 front-end processors (FEPs) via direct memory access (DMA) communication links. This provides host control of scheduled data acquisition and FEP event-triggered data collection tasks. Four of the five FEPs have no operating system

  5. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by the former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  6. Computational intelligence synergies of fuzzy logic, neural networks and evolutionary computing

    CERN Document Server

    Siddique, Nazmul

    2013-01-01

    Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing presents an introduction to some of the cutting edge technological paradigms under the umbrella of computational intelligence. Computational intelligence schemes are investigated with the development of a suitable framework for fuzzy logic, neural networks and evolutionary computing, neuro-fuzzy systems, evolutionary-fuzzy systems and evolutionary neural systems. Applications to linear and non-linear systems are discussed with examples. Key features: Covers all the aspect

  7. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer networks have been studied, with different trends in network architecture and in the protocols that govern data transfer and guarantee reliable communication among all nodes. A hierarchical network structure has been proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In such an architecture, all computers at the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the serial channel. This level of the network can be considered a local area computer network (LACN), which can be used in a nuclear power plant control system since such a plant has geographically dispersed subsystems. Network expansion is straightforward: the common channel is extended for each added computer (host). All the nodes are designed around a microprocessor chip to provide the required intelligence. A node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section that interfaces with the host computer. The private section naturally tends to have some variation in hardware details to match the requirements of individual host computers. 7 figs

  8. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Directory of Open Access Journals (Sweden)

    Yeqing Zhang

    2018-02-01

    Full Text Available For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully.
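The ratio-based acquisition test described in the abstract can be sketched with an FFT-based circular correlation over all code phases. The function below is an illustrative reconstruction; the names and the threshold value are assumptions, not the authors' code:

```python
import numpy as np

def acquire(signal, code, threshold=2.0):
    """Search the code phase via FFT-based circular correlation and apply the
    peak-to-second-peak ratio test used as the acquisition threshold."""
    # corr[k] = sum_n signal[n] * code[n - k] (circular), computed via FFTs
    corr = np.abs(np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(code))))
    peak = int(np.argmax(corr))
    second = np.max(np.delete(corr, peak))   # highest value excluding the peak
    ratio = corr[peak] / second
    return (peak, ratio) if ratio >= threshold else (None, ratio)
```

With a clean replica delayed by 200 samples, the peak lands at code phase 200 and the ratio comfortably exceeds the threshold.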

  9. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Science.gov (United States)

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301

  10. Active Computer Network Defense: An Assessment

    Science.gov (United States)

    2001-04-01

    sufficient base of knowledge in information technology can be assumed to be working on some form of computer network warfare, even if only defensive in...the Defense Information Infrastructure (DII) to attack. Transmission Control Protocol/ Internet Protocol (TCP/IP) networks are inherently resistant to...aims to create this part of information superiority, and computer network defense is one of its fundamental components. Most of these efforts center

  11. Real-time data acquisition and feedback control using Linux Intel computers

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.

    2006-01-01

    This paper describes the experiences of the DIII-D programming staff in adapting Linux based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use with all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom developed software, including recent work to utilize dual Intel Xeon equipped computers within the PCS
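The periodic sample, compute, actuate cycle that the PCS performs can be sketched as a fixed-period loop with absolute deadlines. This is a plain-Python illustration with hypothetical callback names, not DIII-D's real-time code, which relies on a customized Linux kernel:

```python
import time

def control_loop(sample, compute, actuate, period_s, n_cycles):
    """Fixed-period sample -> compute -> actuate cycle (sketch only; a real
    system needs a real-time OS for deterministic deadlines)."""
    deadline = time.monotonic()
    for _ in range(n_cycles):
        actuate(compute(sample()))
        deadline += period_s                  # absolute deadlines avoid drift
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
```

Using absolute deadlines rather than sleeping a fixed interval after each cycle prevents the timing error of one cycle from accumulating into the next.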

  12. The acknowledge project: toward improved efficiency in the knowledge acquisition process

    International Nuclear Information System (INIS)

    Marty, J.C.; Ramparany, F.

    1990-01-01

    This paper presents a general overview of the ACKnowledge Project (Acquisition of Knowledge). Knowledge acquisition is a critical and time-consuming phase in the development of expert systems. The ACKnowledge project aims at improving the efficiency of knowledge acquisition by analyzing and evaluating knowledge acquisition techniques, and by developing a Knowledge Engineering Workbench that supports the knowledge engineer from the early stages of knowledge acquisition up to the implementation of the knowledge base in large and complex application domains, such as the diagnosis of dynamic computer networks

  13. Real time data acquisition of a countrywide commercial microwave link network

    Science.gov (United States)

    Chwala, Christian; Keis, Felix; Kunstmann, Harald

    2015-04-01

    Research in recent years has shown that data from commercial microwave link networks can provide very valuable precipitation information. Since these networks comprise the backbone of the cell phone network, they provide countrywide coverage. However, acquiring the necessary data from the network operators is still difficult. Data is usually made available to researchers with a large time delay and often on an irregular basis. This, of course, hinders the exploitation of commercial microwave link data in operational applications like the QPE forecasts running at national meteorological services. To overcome this, we have developed custom software in joint cooperation with our industry partner Ericsson. The software is installed on a dedicated server at Ericsson and is capable of acquiring data from the countrywide microwave link network in Germany. In its current first operational testing phase, data from several hundred microwave links in southern Germany is recorded. All data is instantaneously sent to our server, where it is stored and organized in an emerging database. The time resolution of the Ericsson data is one minute; the custom acquisition software, however, is capable of processing higher sampling rates. Additionally, we acquire and manage 1 Hz data from four microwave links operated by the skiing resort in Garmisch-Partenkirchen. We will present the concept of the data acquisition and show details of the custom-built software. Additionally, we will showcase the accessibility and basic processing of real-time microwave link data via our database web frontend.
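The physical basis for extracting precipitation from such links is the power-law relation between rain rate and specific microwave attenuation. A minimal sketch follows, with purely illustrative coefficients; the real values depend on link frequency and polarization:

```python
def rain_rate_from_attenuation(path_atten_db, baseline_db, length_km,
                               a=0.33, b=1.0):
    """Invert the power law A = a * R**b, where A is the specific attenuation
    (dB/km) and R the rain rate (mm/h). Coefficients a and b here are
    illustrative only; operational values come from tabulated models."""
    # wet-antenna/baseline correction, clipped so dry periods give zero rain
    specific = max(path_atten_db - baseline_db, 0.0) / length_km
    return (specific / a) ** (1.0 / b)
```

For a 3 km link showing 10 dB of attenuation against a 4 dB dry baseline, the sketch yields roughly 6 mm/h with these coefficients.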

  14. The RING and Seismic Network: Data Acquisition of Co-located Stations

    Science.gov (United States)

    Falco, L.; Avallone, A.; Cattaneo, M.; Cecere, G.; Cogliano, R.; D'Agostino, N.; D'Ambrosio, C.; D'Anastasio, E.; Selvaggi, G.

    2007-12-01

    The plate boundary between Africa and Eurasia represents an interesting geodynamical region characterized by a complex pattern of deformation. First-order scientific problems regarding the existence of rigid blocks within the plate boundary, the present-day activity of the Calabrian subduction zone and the modes of release of seismic deformation still await a better understanding. To address these issues, the INGV (Istituto Nazionale di Geofisica e Vulcanologia) deployed a permanent, integrated and real-time monitoring GPS network (RING) all over Italy. RING now consists of about 120 stations. The CGPS sites, acquiring at 1 Hz and 30 s sampling rates, are integrated with either broad-band or very-broad-band seismometers and accelerometers for an improved definition of the seismically active regions. Most of the sites are connected to the acquisition centre (located in Rome and duplicated in Grottaminarda) through a satellite system (VSAT), while the remaining sites transmit data over the Internet and classical phone connections. The satellite data transmission and the integration with seismic instruments make this network one of the most innovative CGPS networks in Europe. The heterogeneity of the installed instrumentation, the transmission types and the increasing number of stations required a central monitoring and acquisition system, which has been developed in Grottaminarda in southern Italy. For seismic monitoring we chose the open-source system Earthworm, developed by the USGS, with which we store waveforms and implement automatic localization of the seismic events occurring in the area. As most of the GPS sites are acquired by means of Nanometrics satellite technology, we developed specific software (GpsView), written in Java, to monitor the state of health of those CGPS stations. This software receives GPS data from NaqsServer (the Nanometrics acquisition system) and outputs information about the sites (i.e. approx position

  15. A System for Acquisition, Processing and Visualization of Image Time Series from Multiple Camera Networks

    Directory of Open Access Journals (Sweden)

    Cemal Melih Tanis

    2018-06-01

    Full Text Available A system for multiple camera networks is proposed for continuous monitoring of ecosystems by processing image time series. The system is built around the Finnish Meteorological Image PROcessing Toolbox (FMIPROT, which includes data acquisition, processing and visualization from multiple camera networks. The toolbox has a user-friendly graphical user interface (GUI for which only minimal computer knowledge and skills are required to use it. Images from camera networks are acquired and handled automatically according to the common communication protocols, e.g., File Transfer Protocol (FTP. Processing features include GUI based selection of the region of interest (ROI, automatic analysis chain, extraction of ROI based indices such as the green fraction index (GF, red fraction index (RF, blue fraction index (BF, green-red vegetation index (GRVI, and green excess (GEI index, as well as a custom index defined by a user-provided mathematical formula. Analysis results are visualized on interactive plots both on the GUI and hypertext markup language (HTML reports. The users can implement their own developed algorithms to extract information from digital image series for any purpose. The toolbox can also be run in non-GUI mode, which allows running series of analyses in servers unattended and scheduled. The system is demonstrated using an environmental camera network in Finland.
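The ROI-based indices named in the abstract follow standard colour-fraction and vegetation-index definitions. A minimal numpy sketch, in which the function name and array layout are assumptions rather than FMIPROT's API, could be:

```python
import numpy as np

def roi_colour_indices(rgb_roi):
    """Colour-fraction and vegetation indices over an ROI given as an RGB
    array of shape (H, W, 3); formulas are the standard definitions."""
    r, g, b = (rgb_roi[..., i].astype(float) for i in range(3))
    total = r + g + b
    return {
        "GF": float((g / total).mean()),            # green fraction
        "RF": float((r / total).mean()),            # red fraction
        "BF": float((b / total).mean()),            # blue fraction
        "GRVI": float(((g - r) / (g + r)).mean()),  # green-red vegetation index
        "GEI": float((2 * g - r - b).mean()),       # green excess index
    }
```

A uniform ROI of pixels (50, 100, 50) gives GF = 0.5, RF = 0.25, GRVI = 1/3 and GEI = 100, which is a quick sanity check of the formulas.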

  16. Hard real-time quick EXAFS data acquisition with all open source software on a commodity personal computer

    International Nuclear Information System (INIS)

    So, I.; Siddons, D.P.; Caliebe, W.A.; Khalid, S.

    2007-01-01

    We describe here the data acquisition subsystem of the Quick EXAFS (QEXAFS) experiment at the National Synchrotron Light Source of Brookhaven National Laboratory. For ease of future growth and flexibility, almost all software components are open source with very active maintainers: Linux running on an x86 desktop computer, RTAI for real-time response, the COMEDI driver for the data acquisition hardware, Qt and PyQt for the graphical user interface, PyQwt for plotting, and Python for scripting. The signal (A/D) and energy-reading (IK220 encoder) devices in the PCI computer are also EPICS enabled. The control system scans the monochromator energy through a networked EPICS motor. With the real-time kernel, the system is capable of a deterministic data-sampling period of tens of microseconds with a typical timing jitter of several microseconds. At the same time, Linux runs the non-real-time processes handling the user interface. A modern Qt-based controls frontend enhances productivity. The fast plotting and zooming of data in time or energy coordinates lets the experimenters verify the quality of the data before detailed analysis. Python scripting is built in for automation. The typical data rate for continuous runs is around 10 Mbytes/min
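The quoted figure of merit, the timing jitter of a nominally periodic sampler, can be computed directly from recorded sample timestamps. A simple illustrative helper, not part of the QEXAFS software:

```python
def timing_jitter(timestamps, nominal_period):
    """Worst-case and mean deviation of successive sample intervals from the
    nominal sampling period, given monotonically increasing timestamps."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    deviations = [abs(iv - nominal_period) for iv in intervals]
    return max(deviations), sum(deviations) / len(deviations)
```

For a 1 s nominal period and timestamps drifting by a millisecond per sample, both the worst-case and the mean jitter come out at about 1 ms.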

  17. Evaluation of Network Reliability for Computer Networks with Multiple Sources

    Directory of Open Access Journals (Sweden)

    Yi-Kuei Lin

    2012-01-01

    Full Text Available Evaluating the reliability of a network with multiple sources to multiple sinks is a critical issue from the perspective of quality management. Due to the unrealistic definition of paths of network models in previous literature, existing models are not appropriate for real-world computer networks such as the Taiwan Advanced Research and Education Network (TWAREN. This paper proposes a modified stochastic-flow network model to evaluate the network reliability of a practical computer network with multiple sources where data is transmitted through several light paths (LPs. Network reliability is defined as being the probability of delivering a specified amount of data from the sources to the sink. It is taken as a performance index to measure the service level of TWAREN. This paper studies the network reliability of the international portion of TWAREN from two sources (Taipei and Hsinchu to one sink (New York that goes through a submarine and land surface cable between Taiwan and the United States.
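For intuition, the stochastic-flow notion of reliability, the probability that the surviving light paths can still carry the demanded data volume, can be brute-forced for a tiny network. This sketch assumes independent path failures and is not the authors' algorithm:

```python
from itertools import product

def multi_path_reliability(light_paths, demand):
    """Probability that the total capacity of the 'up' light paths meets the
    demand. light_paths is a list of (capacity, availability) pairs; all
    2**n up/down states are enumerated, so this only suits small examples."""
    prob_ok = 0.0
    for state in product((0, 1), repeat=len(light_paths)):
        p_state, capacity = 1.0, 0
        for up, (cap, avail) in zip(state, light_paths):
            p_state *= avail if up else (1.0 - avail)
            capacity += cap if up else 0
        if capacity >= demand:
            prob_ok += p_state
    return prob_ok
```

Two 10-unit paths with availabilities 0.9 and 0.8 deliver 10 units with probability 1 − 0.1·0.2 = 0.98, but the full 20 units only with probability 0.72.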

  18. Cellular computational generalized neuron network for frequency situational intelligence in a multi-machine power system.

    Science.gov (United States)

    Wei, Yawei; Venayagamoorthy, Ganesh Kumar

    2017-09-01

    To prevent a large interconnected power system from a cascading failure, brownout or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, the communication and computational limitations of the currently used supervisory control and data acquisition (SCADA) system mean it can only deliver delayed information. The deployment of synchrophasor measurement devices, by contrast, makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing units of the CCN framework make it particularly flexible for customization to a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), for purposes of providing multi-timescale frequency predictions, ranging from 16.67 ms to 2 s. These two CCGNN and CCMLPN systems were then implemented on two power systems of different scales, one of which includes a large photovoltaic plant. A real-time power system simulator and weather station within the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, were then used to derive typical FSI results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  20. Understanding and designing computer networks

    CERN Document Server

    King, Graham

    1995-01-01

    Understanding and Designing Computer Networks considers the ubiquitous nature of data networks, with particular reference to internetworking and the efficient management of all aspects of networked integrated data systems. In addition it looks at the next phase of networking developments; efficiency and security are covered in the sections dealing with data compression and data encryption; and future examples of network operations, such as network parallelism, are introduced.A comprehensive case study is used throughout the text to apply and illustrate new techniques and concepts as th

  1. Using satellite communications for a mobile computer network

    Science.gov (United States)

    Wyman, Douglas J.

    1993-01-01

    The topics discussed include the following: patrol car automation, mobile computer network, network requirements, network design overview, MCN mobile network software, MCN hub operation, mobile satellite software, hub satellite software, the benefits of patrol car automation, the benefits of satellite mobile computing, and national law enforcement satellite.

  2. An efficient and cost effective nuclear medicine image network

    International Nuclear Information System (INIS)

    Sampathkumaran, K.S.; Miller, T.R.

    1987-01-01

    An image network that is in use in a large nuclear medicine department is described. This network was designed to efficiently handle a large volume of clinical data at reasonable cost. Small, limited function computers are attached to each scintillation camera for data acquisition. The images are transferred by cable network or floppy disc to a large, powerful central computer for processing and display. Cost is minimized by use of small acquisition computers not equipped with expensive video display systems or elaborate analysis software. Thus, financial expenditure can be concentrated in a powerful central computer providing a centralized data base, rapid processing, and an efficient environment for program development. Clinical work is greatly facilitated because the physicians can process and display all studies without leaving the main reading area. (orig.)

  3. Computer network environment planning and analysis

    Science.gov (United States)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems, thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid, with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  4. Computer network for electric power control systems. Chubu denryoku (kabu) denryoku keito seigyoyo computer network

    Energy Technology Data Exchange (ETDEWEB)

    Tsuneizumi, T. (Chubu Electric Power Co. Inc., Nagoya (Japan)); Shimomura, S.; Miyamura, N. (Fuji Electric Co. Ltd., Tokyo (Japan))

    1992-06-03

    A computer network for electric power control systems was developed that applies the open systems interconnection (OSI) international standard for communications protocols. In structuring the OSI network, the session layer is accessed directly from the operation functions when high-speed, small-capacity information is transmitted. File transfer, access and management (FTAM), which has a function for transferring large-capacity data in bulk, is applied when low-speed, large-capacity information is transmitted. A verification test of the real-time computer network (RCN) implementation rules was conducted on a verification model using a minicomputer, and the results satisfied practical performance requirements. For the application interface, kernel, health-check and two-route transmission functions were provided as connection control functions, as were a transmission verification function and a late-arrival discarding function. For the system implementation pattern, a dualized communication server (CS) structure was adopted. The hardware structure may either have the CS function contained in a host computer or install it separately. 5 figs., 6 tabs.

  5. Acquisition and manipulation of computed tomography images of the maxillofacial region for biomedical prototyping

    International Nuclear Information System (INIS)

    Meurer, Maria Ines; Silva, Jorge Vicente Lopes da; Santa Barbara, Ailton; Nobre, Luiz Felipe; Oliveira, Marilia Gerhardt de; Silva, Daniela Nascimento

    2008-01-01

    Biomedical prototyping has resulted from a merger of rapid prototyping and imaging diagnosis technologies. However, this process is complex, considering the necessity of interaction between biomedical sciences and engineering. Good results are highly dependent on the acquisition of computed tomography images and their subsequent manipulation by means of specific software. The present study describes the experience of a multidisciplinary group of researchers in the acquisition and manipulation of computed tomography images of the maxillofacial region aiming at biomedical prototyping for surgical purposes. (author)

  6. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  7. Computational network design from functional specifications

    KAUST Repository

    Peng, Chi Han; Yang, Yong Liang; Bao, Fan; Fink, Daniel; Yan, Dongming; Wonka, Peter; Mitra, Niloy J.

    2016-01-01

    of people in a workspace. Designing such networks from scratch is challenging as even local network changes can have large global effects. We investigate how to computationally create networks starting from only high-level functional specifications

  8. Self-Awareness in Computer Networks

    Directory of Open Access Journals (Sweden)

    Ariane Keller

    2014-01-01

    Full Text Available The Internet architecture works well for a wide variety of communication scenarios. However, its flexibility is limited because it was initially designed to provide communication links between a few static nodes in a homogeneous network and did not attempt to solve the challenges of today’s dynamic network environments. Although the Internet has evolved to a global system of interconnected computer networks, which links together billions of heterogeneous compute nodes, its static architecture remained more or less the same. Nowadays the diversity in networked devices, communication requirements, and network conditions vary heavily, which makes it difficult for a static set of protocols to provide the required functionality. Therefore, we propose a self-aware network architecture in which protocol stacks can be built dynamically. Those protocol stacks can be optimized continuously during communication according to the current requirements. For this network architecture we propose an FPGA-based execution environment called EmbedNet that allows for a dynamic mapping of network protocols to either hardware or software. We show that our architecture can reduce the communication overhead significantly by adapting the protocol stack and that the dynamic hardware/software mapping of protocols considerably reduces the CPU load introduced by packet processing.

  9. Autonomic computing enabled cooperative networked design

    CERN Document Server

    Wodczak, Michal

    2014-01-01

    This book introduces the concept of autonomic computing driven cooperative networked system design from an architectural perspective. As such it leverages and capitalises on the relevant advancements in both the realms of autonomic computing and networking by welding them closely together. In particular, a multi-faceted Autonomic Cooperative System Architectural Model is defined which incorporates the notion of Autonomic Cooperative Behaviour being orchestrated by the Autonomic Cooperative Networking Protocol of a cross-layer nature. The overall proposed solution not only advocates for the inc

  10. Concept of a computer network architecture for complete automation of nuclear power plants

    International Nuclear Information System (INIS)

    Edwards, R.M.; Ray, A.

    1990-01-01

    The state of the art in automation of nuclear power plants has been largely limited to computerized data acquisition, monitoring, display, and recording of process signals. Complete automation of nuclear power plants, which would include plant operations, control, and management, fault diagnosis, and system reconfiguration with efficient and reliable man/machine interactions, has been projected as a realistic goal. This paper presents the concept of a computer network architecture that would use a high-speed optical data highway to integrate diverse, interacting, and spatially distributed functions that are essential for a fully automated nuclear power plant

  11. Network survivability performance (computer diskette)

    Science.gov (United States)

    1993-11-01

    File characteristics: Data file; 1 file. Physical description: 1 computer diskette; 3 1/2 in.; high density; 2.0MB. System requirements: Mac; Word. This technical report has been developed to address the survivability of telecommunications networks including services. It responds to the need for a common understanding of, and assessment techniques for network survivability, availability, integrity, and reliability. It provides a basis for designing and operating telecommunication networks to user expectations for network survivability.

  12. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol
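The core of NTP's on-wire protocol is a four-timestamp exchange from which clock offset and round-trip delay are derived; the standard formulas can be written as:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """NTP on-wire calculation: t1 = client transmit, t2 = server receive,
    t3 = server transmit, t4 = client receive (t1, t4 on the client clock,
    t2, t3 on the server clock). Returns (clock offset, round-trip delay)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

For example, a client whose clock runs 5 s behind the server, with a symmetric 0.1 s one-way network delay, yields an offset of 5.0 s and a round-trip delay of 0.2 s.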

  13. Computing and Network - Overview

    International Nuclear Information System (INIS)

    Jakubowski, Z.

    1999-01-01

    Full text: The responsibility of the Network Group covers: - providing central services like WWW, DNS (Domain Name Server), mail, etc.; - maintenance and support of the Local Area Networks (LAN); - operation of the Wide Area Networks; - support of the central UNIX servers and desktop workstations; - VAX/VMS cluster operation and support. The two-processor HP-UNIX K-200 and six-processor SGI Challenge XL servers delivered stable services to our users. Both servers were upgraded during the past year. The SGI Challenge received an additional 256 MB of memory, which was necessary in order to get the full benefit of the true 64-bit architecture of SGI IRIX 6.2. The upgrade of our HP K-200 server was problematic, so we decided to buy a new powerful machine and join the old and new machines via the fast network. Besides these main servers we have more than 30 workstations from IBM, DEC, HP, SGI and SUN. We observed a real race in PC technology in the past year: Intel processors currently deliver performance comparable with HP or SUN workstations at very low cost. This CPU power is especially visible under Linux, a free Unix-like operating system. Clusters of cheap PC computers should be seriously considered when planning the computing power for future experiments. The CPU power was further decentralized: smaller but powerful computers cover the growing computing demands of our work-groups, creating small ''local computing centers''. The stable network and the concept of central services play an essential role in this scenario. Unfortunately, the network performance for international communications is persistently unacceptable. We believe that the attempt to join the European Quantum project is the only way to achieve reasonable international network performance. Under this plan the Polish scientific community will gain a 34 Mbps international link.
The growing costs of the ''real meetings'' give us no alternative to ''virtual meetings'' via the network in the

  14. The research of computer network security and protection strategy

    Science.gov (United States)

    He, Jian

    2017-05-01

    With the widespread popularity of computer network applications, network security has also received a high degree of attention. The factors affecting network safety are complex, and ensuring network security is systematic work that poses a high challenge. Addressing the safety and reliability problems of computer network systems, this paper, drawing on practical work experience, surveys the threats to network security and the available security technologies, and offers suggestions, measures and system design principles so that the broad mass of computer network users can enhance their safety awareness and master certain network security techniques.

  15. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    Science.gov (United States)

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  16. 1992 CERN school of computing

    International Nuclear Information System (INIS)

    Verkerk, C.

    1993-01-01

    These Proceedings contain written accounts of most of the lectures given at the 1992 CERN School of Computing, covering a variety of topics. A number of aspects of parallel and of distributed computing were treated in five lecture series: 'Status of parallel computing', 'An introduction to the APE100 computer', 'Introduction to distributed systems', 'Interprocess communication' and 'SHIFT, heterogeneous workstation services at CERN'. Triggering and data acquisition for future colliders was covered in: 'Neural networks for trigger' and 'Architecture for future data acquisition systems'. Analysis of experiments was treated in two series of lectures: 'Off-line software in HEP: Experience and trends' and 'Is there a future for event display?'. Design techniques were the subject of lectures on: 'Computer-aided design of electronics', 'CADD, computer-aided detector design' and 'Software design, the methods and the tools'. The other lectures reproduced here treated various fields: 'Second generation expert systems', 'Multidatabase in health care systems', 'Multimedia networks, what is new?', 'Pandora: An experimental distributed multimedia system', 'Benchmarking computers for HEP', 'Experience with some early computers' and 'Turing and ACE; lessons from a 1946 computer design'. (orig.)

  17. CX: A Scalable, Robust Network for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Peter Cappello

    2002-01-01

    Full Text Available CX, a network-based computational exchange, is presented. The system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination. The object-oriented API is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. Computations nonetheless run to completion even as computational hosts join and leave the ongoing computation. Such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. To break a potential task server bottleneck, a network of task servers is presented. Even though task servers are envisioned as reliable, the self-organizing, scalable network of n servers, described as a sibling-connected height-balanced fat tree, tolerates a sequence of n-1 server failures. Tasks are distributed throughout the server network via a simple "diffusion" process. CX is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. CX also provides a test bed for algorithm research into network-based parallel computation.
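
    The "diffusion" of tasks through a server network can be illustrated with a small sketch. The move rule, the amounts, and the three-server chain below are assumptions chosen for illustration, not CX's actual protocol:

```python
# Hypothetical sketch of load "diffusion" among task servers: each round,
# every server sheds part of its surplus toward less-loaded neighbours.

def diffuse(loads, neighbours, rounds=50):
    """Iteratively move a quarter of each pairwise surplus downhill."""
    loads = dict(loads)
    for _ in range(rounds):
        moves = []
        for s, nbrs in neighbours.items():
            for n in nbrs:
                surplus = loads[s] - loads[n]
                if surplus > 3:                      # worth moving anything
                    moves.append((s, n, surplus // 4))
        for src, dst, amount in moves:               # apply from a snapshot
            loads[src] -= amount
            loads[dst] += amount
    return loads

# A three-server chain: all 90 tasks start at server A.
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
balanced = diffuse({"A": 90, "B": 0, "C": 0}, neighbours)
print(balanced)  # loads spread roughly evenly across A, B and C
```

Because only higher-loaded servers shed tasks and amounts are bounded by the observed surplus, the total load is conserved and the distribution flattens over time.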

  18. Mobile Agents in Networking and Distributed Computing

    CERN Document Server

    Cao, Jiannong

    2012-01-01

    The book focuses on mobile agents, which are computer programs that can autonomously migrate between network sites. This text introduces the concepts and principles of mobile agents, provides an overview of mobile agent technology, and focuses on applications in networking and distributed computing.

  19. Clustered data acquisition for the CMS experiment

    International Nuclear Information System (INIS)

    Gutleber, J.; Antchev, G.; Cano, E.; Csilling, A.; Cittolin, S.; Gigi, D.; Gras, P.; Jacobs, C.; Meijers, F.; Meschi, E.; Oh, A.; Orsini, L.; Pollet, L.; Racz, A.; Samyn, D.; Schwick, C.; Zangrando, L.; Erhan, S.; Gulmini, M.; Sphicas, P.; Ninane, A.

    2001-01-01

    Powerful mainstream computing equipment and the advent of affordable multi-Gigabit communication technology allow us to tackle data acquisition problems with clusters of inexpensive computers. Such networks typically incorporate heterogeneous platforms, real-time partitions and custom devices. Therefore, one must strive for a software infrastructure that efficiently combines the nodes to a single, unified resource for the user. Overall requirements for such middleware are high efficiency and configuration flexibility. Intelligent I/O (I 2 O) is an industry specification that defines a uniform messaging format and executing model for processor-enabled communication equipment. Mapping this concept to a distributed computing environment and encapsulating the details of the specification into an application-programming framework allow us to provide run-time support for cluster operation. The authors give a brief overview of a framework, XDAQ that we designed and implemented at CERN for the Compact Muon Solenoid experiment's prototype data acquisition system

  20. 2013 International Conference on Computer Engineering and Network

    CERN Document Server

    Zhu, Tingshao

    2014-01-01

    This book aims to examine innovation in the fields of computer engineering and networking. The book covers important emerging topics in computer engineering and networking, and it will help researchers and engineers improve their knowledge of state-of-art in related areas. The book presents papers from The Proceedings of the 2013 International Conference on Computer Engineering and Network (CENet2013) which was held on July 20-21, in Shanghai, China.

  1. Acquisition and analysis of throughput rates for an operational department-wide PACS

    Science.gov (United States)

    Stewart, Brent K.; Taira, Ricky K.; Dwyer, Samuel J., III; Huang, H. K.

    1992-07-01

    The accurate prediction of image throughput is a critical issue in planning for and acquisition of any successful Picture Archiving and Communication System (PACS). Bottlenecks or design flaws can render an expensive PACS implementation useless. This manuscript presents a method for accurately predicting and measuring image throughput of a PACS design. To create the simulation model of the planned or implemented PACS, it must first be decomposed into principal tasks. We have decomposed the entire PACS image management chain into eight subsystems. These subsystems include network transfers over three different networks (Ethernet, FDDI and UltraNet) and five software programs and/or queues: (1) transfer of image data from the imaging modality computer to the image acquisition/reformatting computer; (2) reformatting the image data into a standard image format; (3) transferring the image data from the acquisition/reformatting computer to the image archive computer; (4) updating a relational database management system over the network; (5) image processing-- rotation and optimal gray-scale lookup table calculation; (6) request that the image be archived; (7) image transfer from the image archive computer to a designated image display workstation; and (8) update the local database on the image display station, separate the image header from the image data and store the image data on a parallel disk array. Through development of an event logging facility and implementation of a network management package we have acquired throughput data for each subsystem in the PACS chain. In addition, from our PACS relational database management system, we have distilled the traffic generation patterns (temporal, file size and destination) of our imaging modality devices. This data has been input into a simulation modeling package (Block Oriented Network Simulator-- BONeS) to estimate the characteristics of the modeled PACS, e.g., the throughput rates and delay time. This simulation
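
    The serial decomposition described above lends itself to a simple discrete-event sketch. The following illustrative model (stage names, service times and arrival pattern are invented, not the measured PACS figures) computes per-image delay through FIFO subsystems in tandem:

```python
# Minimal tandem-queue model: images flow in order through serial stages
# (acquire, reformat, transfer, ...), each with a fixed service time.

def simulate(arrivals, stage_times):
    """Return each image's completion time through FIFO serial stages."""
    free_at = [0.0] * len(stage_times)   # when each stage next becomes idle
    done = []
    for t_arrive in arrivals:
        t = t_arrive
        for i, svc in enumerate(stage_times):
            start = max(t, free_at[i])   # wait if the stage is still busy
            t = start + svc
            free_at[i] = t
        done.append(t)
    return done

# Three images arriving 10 s apart; four of the eight subsystems shown.
stages = [12.0, 5.0, 20.0, 8.0]          # seconds per image, assumed
arrivals = [0.0, 10.0, 20.0]
completions = simulate(arrivals, stages)
delays = [c - a for c, a in zip(completions, arrivals)]
print(delays)  # the 20 s stage is the bottleneck, so delays grow
```

With a 20 s bottleneck stage fed every 10 s, each successive image waits 10 s longer than its predecessor, which is exactly the kind of queueing behaviour a BONeS-style simulation is built to expose.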

  2. Hyperswitch Network For Hypercube Computer

    Science.gov (United States)

    Chow, Edward; Madan, Herbert; Peterson, John

    1989-01-01

    Data-driven dynamic switching enables high speed data transfer. Proposed hyperswitch network based on mixed static and dynamic topologies. Routing header modified in response to congestion or faults encountered as path established. Static topology meets requirement if nodes have switching elements that perform necessary routing header revisions dynamically. Hypercube topology now being implemented with switching element in each computer node aimed at designing very-richly-interconnected multicomputer system. Interconnection network connects great number of small computer nodes, using fixed hypercube topology, characterized by point-to-point links between nodes.

  3. 4th International Conference on Computer Engineering and Networks

    CERN Document Server

    2015-01-01

    This book aims to examine innovation in the fields of computer engineering and networking. The book covers important emerging topics in computer engineering and networking, and it will help researchers and engineers improve their knowledge of state-of-art in related areas. The book presents papers from the 4th International Conference on Computer Engineering and Networks (CENet2014) held July 19-20, 2014 in Shanghai, China.  ·       Covers emerging topics for computer engineering and networking ·       Discusses how to improve productivity by using the latest advanced technologies ·       Examines innovation in the fields of computer engineering and networking  

  4. Computing with Spiking Neuron Networks

    NARCIS (Netherlands)

    H. Paugam-Moisy; S.M. Bohte (Sander); G. Rozenberg; T.H.W. Baeck (Thomas); J.N. Kok (Joost)

    2012-01-01

    Abstract: Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neuroscience, they derive their strength and interest from an accurate modeling of synaptic interactions

  5. Automatic data acquisition system of environmental radiation monitor with a personal computer

    International Nuclear Information System (INIS)

    Ohkubo, Tohru; Nakamura, Takashi.

    1984-05-01

    An automatic data acquisition system for environmental radiation monitors was developed at low cost using a PET personal computer. The count pulses from eight monitors installed at four site boundaries were transmitted to the radiation control room by a signal transmission device and analyzed by the computer, via a 12-channel scaler and a PET-CAMAC interface, for graphic display and printing. (author)

  6. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  7. Data Acquisition and Mass Storage

    Science.gov (United States)

    Vande Vyvre, P.

    2004-08-01

    The experiments performed at supercolliders will constitute a new challenge in several disciplines of High Energy Physics and Information Technology. This will definitely be the case for data acquisition and mass storage. The microelectronics, communication, and computing industries are maintaining an exponential increase of the performance of their products. The market of commodity products remains the largest and the most competitive market of technology products. This constitutes a strong incentive to use these commodity products extensively as components to build the data acquisition and computing infrastructures of the future generation of experiments. The present generation of experiments in Europe and in the US already constitutes an important step in this direction. The experience acquired in the design and the construction of the present experiments has to be complemented by a large R&D effort executed with good awareness of industry developments. The future experiments will also be expected to follow major trends of our present world: deliver physics results faster and become more and more visible and accessible. The present evolution of the technologies and the burgeoning of GRID projects indicate that these trends will be made possible. This paper includes a brief overview of the technologies currently used for the different tasks of the experimental data chain: data acquisition, selection, storage, processing, and analysis. The major trends of the computing and networking technologies are then indicated with particular attention paid to their influence on the future experiments. Finally, the vision of future data acquisition and processing systems and their promise for future supercolliders is presented.

  8. Computing chemical organizations in biological networks.

    Science.gov (United States)

    Centler, Florian; Kaleta, Christoph; di Fenizio, Pietro Speroni; Dittrich, Peter

    2008-07-15

    Novel techniques are required to analyze computational models of intracellular processes as they increase steadily in size and complexity. The theory of chemical organizations has recently been introduced as such a technique that links the topology of biochemical reaction network models to their dynamical repertoire. The network is decomposed into algebraically closed and self-maintaining subnetworks called organizations. They form a hierarchy representing all feasible system states including all steady states. We present three algorithms to compute the hierarchy of organizations for network models provided in SBML format. Two of them compute the complete organization hierarchy, while the third one uses heuristics to obtain a subset of all organizations for large models. While the constructive approach computes the hierarchy starting from the smallest organization in a bottom-up fashion, the flux-based approach employs self-maintaining flux distributions to determine organizations. A runtime comparison on 16 different network models of natural systems showed that neither of the two exhaustive algorithms is superior in all cases. Studying a 'genome-scale' network model with 762 species and 1193 reactions, we demonstrate how the organization hierarchy helps to uncover the model structure and allows one to evaluate the model's quality, for example by detecting components and subsystems of the model whose maintenance is not explained by the model. All data and a Java implementation that plugs into the Systems Biology Workbench is available from http://www.minet.uni-jena.de/csb/prj/ot/tools.
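
    The "algebraically closed" property can be sketched as a fixed-point computation. The fragment below (a hypothetical toy reaction set, deliberately ignoring the self-maintenance condition handled by the flux-based approach) adds the products of every applicable reaction until nothing changes:

```python
# Closure of a species set under a reaction network: if all reactants of
# a reaction are present, its products must be present too.

def closure(seed, reactions):
    """reactions: list of (reactant_set, product_set) pairs."""
    species = set(seed)
    changed = True
    while changed:
        changed = False
        for reactants, products in reactions:
            if reactants <= species and not products <= species:
                species |= products          # add missing products
                changed = True
    return species

# A toy reaction network, invented for illustration.
reactions = [
    ({"a", "b"}, {"c"}),     # a + b -> c
    ({"c"}, {"a", "d"}),     # c -> a + d
]
print(sorted(closure({"a", "b"}, reactions)))  # ['a', 'b', 'c', 'd']
```

A full organization algorithm would additionally test each closed set for self-maintenance via a flux distribution, which is the linear-programming part omitted here.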

  9. Development of the computer network of IFIN-HH

    International Nuclear Information System (INIS)

    Danet, A.; Mirica, M.; Constantinescu, S.

    1998-01-01

    The general computer network of Horia Hulubei National Institute for Physics and Nuclear Engineering (IFIN-HH), as part of RNC (Romanian National Computer Network for scientific research and technological development), offers the Romanian physics research community an efficient and cost-effective infrastructure to communicate and collaborate with fellow researchers abroad, and to collect and exchange the most up-to-date information in their research area. RNC is the national project co-ordinated and established by the Ministry of Research and Technology, targeting the following main objectives: - setting up a technical and organizational infrastructure meant to provide national and international electronic services for the Romanian scientific research community; - providing a rapid and competitive tool for the exchange of information in the framework of the R-D community; - using the scientific and technical data bases available in the country and offered by the national networks of other countries through international networks; - providing a support for information, documentation, and scientific and technical co-operation. The guiding principle in elaborating the project of the general computer network of IFIN-HH was to implement an open system based on OSI standards, without technical barriers to communication between different communities using different computing hardware and software. The major objectives achieved in 1997 in the direction of developing the general computer network of IFIN-HH (over 250 computers connected) were: - connecting all the existing and newly installed computer equipment and providing adequate connectivity; - providing the usual Internet services: e-mail, ftp, telnet, finger, gopher; - providing access to the World Wide Web resources; - providing on-line statistics of the IP traffic (input and output) of each node of the domain computer network; - improving the performance of the connection with the central node of RNC. (authors)

  10. The ALICE data acquisition system

    CERN Document Server

    Carena, F; Chapeland, S; Chibante Barroso, V; Costa, F; Dénes, E; Divià, R; Fuchs, U; Grigore, A; Kiss, T; Simonetti, G; Soós, C; Telesca, A; Vande Vyvre, P; Von Haller, B

    2014-01-01

    In this paper we describe the design, the construction, the commissioning and the operation of the Data Acquisition (DAQ) and Experiment Control Systems (ECS) of the ALICE experiment at the CERN Large Hadron Collider (LHC). The DAQ and the ECS are the systems used respectively for the acquisition of all physics data and for the overall control of the experiment. They are two computing systems made of hundreds of PCs and data storage units interconnected via two networks. The collection of experimental data from the detectors is performed by several hundreds of high-speed optical links. We describe in detail the design considerations for these systems handling the extreme data throughput resulting from central lead ions collisions at LHC energy. The implementation of the resulting requirements into hardware (custom optical links and commercial computing equipment), infrastructure (racks, cooling, power distribution, control room), and software led to many innovative solutions which are described together with ...

  11. Computer-Aided Prototyping Systems (CAPS) within the software acquisition process: a case study

    OpenAIRE

    Ellis, Mary Kay

    1993-01-01

    Approved for public release; distribution is unlimited. This thesis provides a case study which examines the benefits derived from the practice of computer-aided prototyping within the software acquisition process. An experimental prototyping system currently under research is the Computer Aided Prototyping System (CAPS), managed under the Computer Science department of the Naval Postgraduate School, Monterey, California. This thesis determines the qualitative value which may be realized by ...

  12. Computational chaos in massively parallel neural networks

    Science.gov (United States)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software as well as hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components or to propagation delays. However, researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. The researchers present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.
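
    The role of contracting operators can be illustrated with a minimal sketch, assuming a hypothetical 2-neuron damped linear update whose weights keep the map a contraction; coordinates are updated one at a time in random order, mimicking concurrent asynchronicity:

```python
# Asynchronous iteration of a contracting update map: regardless of the
# (random) update order, the state converges to the unique fixed point.
import random

def update(i, x):
    # Damped update; |couplings| < 1 keep the overall map a contraction.
    w = [[0.0, 0.4], [0.3, 0.0]]
    b = [1.0, -0.5]
    return 0.5 * x[i] + 0.5 * (w[i][0] * x[0] + w[i][1] * x[1] + b[i])

random.seed(0)
x = [10.0, -10.0]                 # start far from the fixed point
for _ in range(500):
    i = random.randrange(2)       # unsynchronized, one neuron at a time
    x[i] = update(i, x)
print(x)  # approaches the unique fixed point x0 = 10/11, x1 = -5/22
```

Solving x0 = 0.4 x1 + 1 and x1 = 0.3 x0 - 0.5 gives the fixed point (10/11, -5/22); because the operator is contracting, any asynchronous update schedule that visits both coordinates infinitely often converges to it.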

  13. Upgrading of analogue gamma cameras with PC based computer system

    International Nuclear Information System (INIS)

    Fidler, V.; Prepadnik, M.

    2002-01-01

    Full text: Dedicated nuclear medicine computers for the acquisition and processing of images from analogue gamma cameras in developing countries are in many cases faulty and technologically obsolete. The aim of the upgrading project of the International Atomic Energy Agency (IAEA) was to support the development of a PC-based computer system which would cost $5,000 in total. Several research institutions from different countries (China, Cuba, India and Slovenia) were financially supported in this development. The basic demands on the system were: one acquisition card on an ISA bus, image resolution up to 256x256, SVGA graphics, low count loss at high count rates, standard acquisition and clinical protocols incorporated in PIP (Portable Image Processing), on-line energy and uniformity correction, graphic printing, and networking. The most functionally stable acquisition system, tested at several international workshops and university clinics, was the Slovenian one, with a complete set of acquisition and clinical protocols, transfer of scintigraphic data from the acquisition card to the PC through a port, count loss of less than 1% at a count rate of 120 kc/s, improvement of the integral uniformity index by a factor of 3-5, and reporting, networking and archiving solutions for simple MS networks or server-oriented network systems (NT server, etc.). More than 300 gamma cameras in 52 countries were digitized and put into routine work. The project of upgrading analogue gamma cameras greatly promoted nuclear medicine in the developing countries by replacing old computer systems, improving the technological knowledge of end users through workshops and training courses, and lowering the maintenance costs of the departments. (author)

  14. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  15. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well-connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, compared with a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  16. D0 experiment: its trigger, data acquisition, and computers

    International Nuclear Information System (INIS)

    Cutts, D.; Zeller, R.; Schamberger, D.; Van Berg, R.

    1984-05-01

    The new collider facility to be built at Fermilab's Tevatron-I D0 region is described. The data acquisition requirements are discussed, as well as the hardware and software triggers designed to meet these needs. An array of MicroVAX computers running VAXELN will filter in parallel (a complete event in each microcomputer) and transmit accepted events via Ethernet to a host. This system, together with its subsequent offline needs, is briefly presented

  17. Geoscientific Data Acquisition and Management System (systeme d'Acquisition et de Gestion des Donnees, S.A.G.D.) of the Andra Meuse / Haute-Marne research center

    International Nuclear Information System (INIS)

    Tabani, P.; Hermand, G.; Delay, J.; Mangeot, A.

    2010-01-01

    Document available in extended abstract form only. ANDRA is directly responsible for all scientific data acquired in real time and wire-line at the Meuse/Haute-Marne underground research laboratory. To fulfil the needs for the acquisition, storage and display of real-time data, Andra decided to develop and install a system called SAGD (Systeme d'Acquisition et de Gestion des Donnees). With this in view, a system was designed to: - Determine the acquisition tools and methods so that technical failures would not result in the loss of data or the acquisition of erroneous data, - Store (conserve) all data in the long term in a single place and in a single form, - Allow the diffusion of, and free access to, data for the large community of ANDRA researchers, partners and service contractors in a single, fluid way, whatever the source of the data, - Help external communication through a user-friendly and easy-to-understand presentation of the recorded data. SAGD fulfils these objectives by: - Making available in real time, and through a single system, all experimental data under acquisition at the MHM Center and the Mont Terri laboratory, - Displaying the recorded data over temporal windows and at specific time steps, - Allowing remote control of the experiments, - Ensuring the traceability of all recorded information, - Ensuring data storage in a database. SAGD was deployed in the first experimental drift at -445 m in November 2004. It was subsequently extended to the underground Mont Terri laboratory in Switzerland in 2005 and to the entire surface logging network of the Meuse/Haute-Marne Center in 2008. The SAGD computer network is an autonomous network consisting of optical fiber links which transmit experiment data to the servers and computers in the control room, whether the data originate from the bottom of a borehole or from surface level. A high-speed link between Mont Terri and Bure allows remote control of all the experiments and the centralization of all

  18. Proceedings of workshop on distributed computing and network

    International Nuclear Information System (INIS)

    Abe, F.; Yuasa, F.

    1993-02-01

    'Distributed Computing and Network' is one of the hot topics in the field of computing. Recent progress in computer technology is providing new paradigms for computing, even in High Energy Physics. In particular, workstation-based computer systems are opening a new, active field of computer applications in science. The major topics discussed in this symposium are distributed computing and wide-area research networks for domestic and international links. The two-day symposium provided enough topics to foresee the next direction of our computing environment. 70 people got together to discuss these interesting themes as well as to exchange information on computer technologies. (J.P.N.)

  19. Spatio-Temporal Patterns of the International Merger and Acquisition Network.

    Science.gov (United States)

    Dueñas, Marco; Mastrandrea, Rossana; Barigozzi, Matteo; Fagiolo, Giorgio

    2017-09-07

    This paper analyses the world web of mergers and acquisitions (M&As) using a complex network approach. We use data of M&As to build a temporal sequence of binary and weighted-directed networks for the period 1995-2010 and 224 countries (nodes) connected according to their M&As flows (links). We study different geographical and temporal aspects of the international M&A network (IMAN), building sequences of filtered sub-networks whose links belong to specific intervals of distance or time. Given that M&As and trade are complementary ways of reaching foreign markets, we perform our analysis using statistics employed for the study of the international trade network (ITN), highlighting the similarities and differences between the ITN and the IMAN. In contrast to the ITN, the IMAN is a low density network characterized by a persistent giant component with many external nodes and low reciprocity. Clustering patterns are very heterogeneous and dynamic. High-income economies are the main acquirers and are characterized by high connectivity, implying that most countries are targets of a few acquirers. Like in the ITN, geographical distance strongly impacts the structure of the IMAN: link-weights and node degrees have a non-linear relation with distance, and an assortative pattern is present at short distances.
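
    Two of the simpler statistics used in such analyses, density and reciprocity, can be sketched in a few lines. The country codes and links below are invented for illustration, not taken from the M&A data:

```python
# Density and reciprocity of a directed network given as an edge list.

def density(nodes, edges):
    """Share of realized links out of all possible directed links."""
    n = len(nodes)
    return len(edges) / (n * (n - 1))     # directed, no self-loops

def reciprocity(edges):
    """Fraction of links whose reverse link also exists."""
    e = set(edges)
    return sum((b, a) in e for a, b in e) / len(e)

nodes = {"US", "UK", "CN", "BR", "ZA"}
edges = [("US", "BR"), ("UK", "ZA"), ("CN", "BR"),
         ("UK", "US"), ("US", "UK")]
print(density(nodes, edges))    # 5 of 20 possible links -> 0.25
print(reciprocity(edges))       # only US<->UK is mutual -> 0.4
```

A low density combined with low reciprocity, as reported for the IMAN, means few of the possible acquisition channels are used and acquirer/target roles are rarely symmetric.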

  20. The Use of Computer-Based Videogames in Knowledge Acquisition and Retention.

    Science.gov (United States)

    Ricci, Katrina E.

    1994-01-01

    Research conducted at the Naval Training Systems Center in Orlando, Florida, investigated the acquisition and retention of basic knowledge with subject matter presented in the forms of text, test, and game. Results are discussed in terms of the effectiveness of computer-based games for military training. (Author/AEF)

  1. Architecture of the global land acquisition system: applying the tools of network science to identify key vulnerabilities

    International Nuclear Information System (INIS)

    Seaquist, J W; Li Johansson, Emma; Nicholas, Kimberly A

    2014-01-01

    Global land acquisitions, often dubbed ‘land grabbing’, are increasingly becoming drivers of land change. We use the tools of network science to describe the connectivity of the global acquisition system. We find that 126 countries participate in this form of global land trade. Importers are concentrated in the Global North, the emerging economies of Asia, and the Middle East, while exporters are confined to the Global South and Eastern Europe. A small handful of countries account for the majority of land acquisitions (particularly China, the UK, and the US), the cumulative distribution of which is best described by a power law. We also find that countries with many land trading partners play a disproportionately central role in providing connectivity across the network, with the shortest trading path between any two countries traversing either China, the US, or the UK over a third of the time. The land acquisition network is characterized by very few trading cliques and therefore by a low degree of preferential trading or regionalization. We also show that countries with many export partners trade land with countries with few import partners, and vice versa, meaning that less developed countries have a large array of export partnerships with developed countries, but very few import partnerships (disassortative relationship). Finally, we find that the structure of the network is potentially prone to propagating crises (e.g., if importing countries become dependent on crops exported from their land trading partners). This network analysis approach can be used to quantitatively analyze and understand telecoupled systems as well as to anticipate and diagnose the potential effects of telecoupling. (letter)
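
    The claim about shortest trading paths traversing a few hub countries can be illustrated with a BFS sketch on a hypothetical star-like network (names and topology are assumptions made for the example):

```python
# For every ordered node pair, find one BFS shortest path and count how
# often the designated hub appears as an intermediate node.
from collections import deque

def shortest_path(graph, src, dst):
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    if dst not in prev:
        return None
    path, u = [], dst
    while u is not None:          # walk predecessors back to the source
        path.append(u)
        u = prev[u]
    return path[::-1]

# Star-like toy network: the hub connects otherwise separate countries.
graph = {"HUB": ["A", "B", "C"], "A": ["HUB"], "B": ["HUB"], "C": ["HUB"]}
pairs = [(s, d) for s in graph for d in graph if s != d]
via_hub = sum("HUB" in shortest_path(graph, s, d)[1:-1] for s, d in pairs)
print(via_hub, "of", len(pairs))  # 6 of 12 pairs route through the hub
```

Here every path between two non-hub countries must pass through the hub, an extreme version of the intermediation the paper measures for China, the US and the UK.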

  2. Second International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Konar, Amit; Chakraborty, Aruna

    2014-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two-volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 148 scholarly papers, which have been accepted for presentation from over 640 submissions in the second International Conference on Advanced Computing, Networking and Informatics, 2014, held in Kolkata, India during June 24-26, 2014. The first volume includes innovative computing techniques and relevant research results in informatics with selective applications in pattern recognition, signal/image process...

  3. The use of neural networks in the D0 data acquisition system

    International Nuclear Information System (INIS)

    Cutts, D.; Hoftun, J.S.; Sornborger, A.; Astur, R.V.; Johnson, C.R.; Zeller, R.T.

    1989-01-01

    We discuss the possible application of algorithms derived from neural networks to the D0 experiment. The D0 data acquisition system is based on a large farm of MicroVAXes, each independently performing real-time event filtering. A new generation of multiport memories in each MicroVAX node will enable special function processors to have direct access to event data. We describe an exploratory study of back propagation neural networks, such as might be configured in the nodes, for more efficient event filtering. 9 refs., 3 figs., 1 tab

  4. Analysis of Computer Network Information Based on "Big Data"

    Science.gov (United States)

    Li, Tianli

    2017-11-01

    With the development of the current era, computer networks and big data have gradually become part of people's lives. People use computers to bring convenience to their own lives, but at the same time there are many network information security problems that demand attention. This paper analyzes the information security of computer networks based on "big data" and puts forward some solutions.

  5. Email networks and the spread of computer viruses

    Science.gov (United States)

    Newman, M. E.; Forrest, Stephanie; Balthrop, Justin

    2002-09-01

    Many computer viruses spread via electronic mail, making use of computer users' email address books as a source for email addresses of new victims. These address books form a directed social network of connections between individuals over which the virus spreads. Here we investigate empirically the structure of this network using data drawn from a large computer installation, and discuss the implications of this structure for the understanding and prevention of computer virus epidemics.
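
    In the simplest deterministic picture, the set of machines such a virus can eventually reach is just graph reachability over the directed address-book network. A minimal sketch with made-up addresses (not the paper's data):

```python
from collections import deque

# Directed address-book graph: an edge a -> b means b appears in a's address
# book, so a virus running on a's machine can mail itself to b. Names made up.
address_books = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["alice"],
    "dave": [],
    "erin": ["alice"],  # erin knows alice, but nobody has erin in their book
}

def infected_set(patient_zero):
    """Every account reachable from patient_zero along address-book edges (BFS)."""
    seen, queue = {patient_zero}, deque([patient_zero])
    while queue:
        u = queue.popleft()
        for v in address_books.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

print(sorted(infected_set("alice")))  # → ['alice', 'bob', 'carol', 'dave']
```

    Note the asymmetry the abstract emphasizes: the graph is directed, so "erin" can infect "alice" but not vice versa.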

  6. Data acquisition system in TPE-1RM15

    International Nuclear Information System (INIS)

    Yagi, Yasuyuki; Yahagi, Eiichi; Hirano, Yoichi; Shimada, Toshio; Hirota, Isao; Maejima, Yoshiki

    1991-01-01

    The data acquisition system for the TPE-1RM15 reversed field pinch machine was developed and has recently been completed. The data to be acquired consist of many channels of time-series data coming from plasma diagnostics. The newly developed data acquisition system uses a CAMAC (Computer Automated Measurement And Control) system as the front-end data acquisition system and a MicroVAX II for control, file management and analyses. Special computer programs, DAQR/D, have been developed for the data acquisition routine. Experimental settings and process control items are managed by a parameter database in a shared common region, and every task can easily refer to it. The acquired data are stored in a mass storage system (a total of 1.3 GBytes plus a magnetic tape system) including an optical disk system, which saves storage space and allows quick reference. At present, the CAMAC system has 88 (1 MHz sampling) and 64 (5 kHz sampling) channels, corresponding to 1.6 MBytes per shot. The data acquisition system can finish one routine with 1.6 MBytes of data within 5 minutes, depending on the amount of graphic output. The hardware and software of the system are specified so that the system can be easily expanded. The computer is connected to the AIST Ethernet, so the system can be accessed remotely and the acquired data can be transferred to the mainframes on the network. Details of the specifications and performance of the system are given in this report. (author)

  7. Data acquisition system using a C 90-10 computer

    International Nuclear Information System (INIS)

    Smiljanic, Gabro

    1969-05-01

    The aim of this study is to enable the acquisition of experimental data into the memory of a digital computer. These data come from analog-to-digital converters that analyze the amplitude of the pulses provided by detectors. Normally the computer executes its main program (data processing, transfer to magnetic tape, visualization, etc.). When information is available at the output of a converter, an interruption of the main program is requested, and once it is granted a subroutine handles the transfer of the information into the computer. The author has also considered bi- and tri-parametric acquisition. The computer and the converters are commercial devices (the C 90-10 computer from CII and the CA 12 or CA 25 converters from Intertechnique), while the systems adapting the input-output levels and the visualization were designed and built at CEA Saclay. An interface device was built to connect the converters; this is the hardware side of the system. In addition, the programs necessary for the operation of the computer were studied and developed; this is the software side of the system. As far as possible the interface is designed to be universal, i.e. it must be able to work with equipment from other manufacturers. The acquisition of the data is carried out in two phases: a) the converter expresses the amplitude of the input signal as a binary number, which is transferred into the interface at the same time as an interruption of the main program is requested. b) After acceptance of this interruption, the subroutine handles the transfer of the information from the interface into the computer, then adds one to the word located at the address determined from the information received. In other words, the system behaves like an amplitude analyzer, whose operation is well known. It is, however, far more flexible, because the programs can be quickly adapted to the needs of the experiment at hand, and because of the possibility to treat
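
    The acquisition loop described above (the ADC output selects an address, and the interrupt subroutine adds one to the word stored there) is pulse-height histogramming. A minimal sketch, with arbitrary channel count and sample values:

```python
N_CHANNELS = 1024          # arbitrary analyzer depth, not the C 90-10's
histogram = [0] * N_CHANNELS

def on_converter_interrupt(code):
    """Stand-in for the interrupt subroutine: 'code' is the binary number
    produced by the ADC; increment the word at the corresponding address."""
    histogram[code] += 1

# Simulated stream of ADC codes (in the real system these arrive via interrupts
# while the main program runs).
for code in [100, 512, 100, 100, 512, 7]:
    on_converter_interrupt(code)

print(histogram[100], histogram[512], histogram[7])  # → 3 2 1
```

    The array of counts indexed by amplitude code is exactly the spectrum an amplitude analyzer accumulates.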

  8. Networking DEC and IBM computers

    Science.gov (United States)

    Mish, W. H.

    1983-01-01

    Local area networking of DEC and IBM computers within the structure of the ISO-OSI Seven Layer Reference Model, at a raw signaling speed of 1 Mbps or greater, is discussed. After an introduction to the ISO-OSI Reference Model and the IEEE-802 Draft Standard for Local Area Networks (LANs), there follows a detailed discussion and comparison of the products available from a variety of manufacturers to perform this networking task. A summary of these products is presented in a table.

  9. CAMAC data acquisition system based on micro VAXII

    International Nuclear Information System (INIS)

    Yin Xijin; Shen Cuihua; Bai Xiaowei; Li Weisheng

    1993-01-01

    The CAMAC data acquisition system based on a MicroVAX II computer provides high-speed, zero-suppressed, 256-parameter CAMAC acquisition. It consists of three parts: the control logic unit, the CAMAC readout system and the host computer system. When the control logic unit is triggered by an external electronic selection signal, it produces a pilot signal to keep all of the parameters of a particular event together. Event-mode data are collected using a CAMAC fast crate controller. The host computer system is equipped with a number of peripheral devices, including: 1. at least two M990 GCR, 6250 B/inch, magnetic tape drives operating at 75 inches per second or faster; 2. a Tektronix 4014 storage scope; 3. a laser printer, LND3-AE, or a copier capable of making hard copies of the Tektronix 4014 screen; 4. a control console device and a line printer; 5. an x-press color graphics terminal; 6. a DEC network connection. During real-time acquisition the system is able to handle and analyse the data stream on-line, to monitor and control the experiment, and to display spectra dynamically on the Tektronix 4014

  10. Improved Seismic Acquisition System and Data Processing for the Italian National Seismic Network

    Science.gov (United States)

    Badiali, L.; Marcocci, C.; Mele, F.; Piscini, A.

    2001-12-01

    A new system for acquiring and processing digital signals has been developed in the last few years at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). The system makes extensive use of internet communication protocol standards such as TCP and UDP, which are used as the transport highway inside the Italian network, and possibly in the near future outside it, to share or redirect data among processes. The Italian National Seismic Network has been operating for about 18 years, equipped with vertical short-period seismometers and transmitting through analog lines to the computer center in Rome. We are now concentrating our efforts on speeding the migration towards a fully digital network of about 150 stations, equipped with either broad-band or 5-second sensors, connected to the data center partly through wired digital communication and partly through satellite digital communication. The overall process is layered across the intranet and/or internet. Every layer gathers data in a simple format and provides data in a processed format, ready to be distributed to the next layer. The lowest level acquires seismic data (raw waveforms) coming from the remote stations; it handshakes, checks and sends data over the LAN or WAN according to a distribution list of the other machines whose programs are waiting for them. At the next level are the picking procedures, or "pickers", one per instrument, looking for phases. A picker spreads phases, again through the LAN or WAN and according to a distribution list, to one or more waiting locating machines tuned to generate a seismic event. The event-locating procedure itself, the highest level in this stack, can exchange information with other similar procedures. Such a layered and distributed structure with nearby targets allows other seismic networks to join the processing and data collection of the same ongoing event, creating a virtual network larger than the original one. At present we plan to cooperate with other
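
    The distribution-list forwarding between layers can be sketched with standard-library UDP sockets; the station code and message format below are hypothetical, not INGV's actual wire format:

```python
import socket

# Listener at the next layer (e.g., a locating machine); port 0 lets the OS
# pick a free port, so this sketch does not collide with real services.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))

# Hypothetical distribution list of (host, port) targets waiting for phases.
distribution_list = [listener.getsockname()]

def spread_phase(phase_line, targets):
    """Send one picked phase to every machine on the distribution list (UDP)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for addr in targets:
            sock.sendto(phase_line.encode(), addr)

spread_phase("STA01 P 2001-12-01T00:00:03.2", distribution_list)  # made-up phase
data, _ = listener.recvfrom(4096)
listener.close()
print(data.decode())
```

    A real deployment would add the handshaking and checking the abstract mentions (or use TCP for the links that need delivery guarantees); this only illustrates the one-to-many fan-out along a distribution list.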

  11. Mobile Computing and Ubiquitous Networking: Concepts, Technologies and Challenges.

    Science.gov (United States)

    Pierre, Samuel

    2001-01-01

    Analyzes concepts, technologies and challenges related to mobile computing and networking. Defines basic concepts of cellular systems. Describes the evolution of wireless technologies that constitute the foundations of mobile computing and ubiquitous networking. Presents characterization and issues of mobile computing. Analyzes economical and…

  12. Remote control of data acquisition devices by means of message oriented middleware

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A.; Pereira, A.; Vega, J.; Kirpitchev, I.

    2007-01-01

    The TJ-II autonomous acquisition systems are computers running dedicated applications for programming and controlling data acquisition channels and also for integrating acquired data into the central database. These computers are located in the experimental hall and have to be remotely controlled during plasma discharges. A remote control for these systems has been implemented by taking advantage of the message-oriented middleware recently introduced into the TJ-II data acquisition system. Java Message Service (JMS) is used as the messaging application program interface. All the acquisition actions that are available through the system console of the data acquisition computers (starting or aborting an acquisition, restarting the system or updating the acquisition application) can now be initiated remotely. Command messages are sent to the acquisition systems located in the experimental hall close to the TJ-II device by using the messaging software, without having to use a remote desktop application, which would produce heavy network traffic and require manual operation. Action commands can be sent to a single acquisition system or to several at the same time. This software is integrated into the TJ-II remote participation system, and the acquisition systems can be commanded from inside or outside the laboratory. All this software is integrated into the security framework provided by PAPI, thus preventing unauthorized users from commanding the acquisition computers. In order to dimension and distribute the messaging services, performance tests of the message-oriented middleware have been carried out, and their results are presented. As suggested by the test results, different transport connectors are used: the TCP transport protocol is used in the local environment, while HTTP is used for remote accesses, thereby allowing system performance to be optimized
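
    JMS is a Java API, but the one-to-one versus one-to-many command semantics described above can be illustrated with a tiny in-process dispatcher; all names here are hypothetical, not the TJ-II software's:

```python
class CommandBus:
    """Minimal stand-in for the JMS broker: acquisition systems subscribe,
    and commands can target one system or be broadcast to all of them."""
    def __init__(self):
        self.subscribers = {}          # system name -> handler callable

    def subscribe(self, name, handler):
        self.subscribers[name] = handler

    def send(self, command, to=None):
        """to=None broadcasts (topic-like); to='name' targets one system."""
        targets = self.subscribers if to is None else {to: self.subscribers[to]}
        for name, handler in targets.items():
            handler(command)

# Two hypothetical acquisition computers record the commands they receive.
log = {"daq1": [], "daq2": []}
bus = CommandBus()
bus.subscribe("daq1", log["daq1"].append)
bus.subscribe("daq2", log["daq2"].append)

bus.send("start_acquisition")          # broadcast, like a JMS topic
bus.send("abort", to="daq2")           # targeted, like a JMS queue
print(log)  # → {'daq1': ['start_acquisition'], 'daq2': ['start_acquisition', 'abort']}
```

    In the real system the broker also crosses the network boundary, which is where the TCP-versus-HTTP transport-connector choice discussed in the abstract comes in.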

  13. Remote control of data acquisition devices by means of message oriented middleware

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E. [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain)], E-mail: edi.sanchez@ciemat.es; Portas, A.; Pereira, A.; Vega, J.; Kirpitchev, I. [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain)

    2007-10-15

    The TJ-II autonomous acquisition systems are computers running dedicated applications for programming and controlling data acquisition channels and also for integrating acquired data into the central database. These computers are located in the experimental hall and have to be remotely controlled during plasma discharges. A remote control for these systems has been implemented by taking advantage of the message-oriented middleware recently introduced into the TJ-II data acquisition system. Java Message Service (JMS) is used as the messaging application program interface. All the acquisition actions that are available through the system console of the data acquisition computers (starting or aborting an acquisition, restarting the system or updating the acquisition application) can now be initiated remotely. Command messages are sent to the acquisition systems located in the experimental hall close to the TJ-II device by using the messaging software, without having to use a remote desktop application, which would produce heavy network traffic and require manual operation. Action commands can be sent to a single acquisition system or to several at the same time. This software is integrated into the TJ-II remote participation system, and the acquisition systems can be commanded from inside or outside the laboratory. All this software is integrated into the security framework provided by PAPI, thus preventing unauthorized users from commanding the acquisition computers. In order to dimension and distribute the messaging services, performance tests of the message-oriented middleware have been carried out, and their results are presented. As suggested by the test results, different transport connectors are used: the TCP transport protocol is used in the local environment, while HTTP is used for remote accesses, thereby allowing system performance to be optimized.

  14. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims to provide computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their co-authors, and described some features of the co-authorship network in relation to author rank. Suggestions for further studies are discussed.
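
    A co-authorship network of this kind is assembled directly from paper-author lists: every pair of authors on the same article gets an edge. A sketch on fabricated papers (not Scopus records):

```python
from itertools import combinations

# Each paper is represented by its author list (fabricated examples).
papers = [
    ["Kim", "Lee", "Park"],
    ["Kim", "Lee"],
    ["Kim", "Chen"],
    ["Singh", "Chen"],
]

# Undirected co-authorship edges: two authors are linked if they share a paper.
coauthors = {}
for authors in papers:
    for a, b in combinations(set(authors), 2):
        coauthors.setdefault(a, set()).add(b)
        coauthors.setdefault(b, set()).add(a)

# Rank authors by number of distinct co-authors (their degree in the network).
ranking = sorted(coauthors, key=lambda a: len(coauthors[a]), reverse=True)
print(ranking[0], len(coauthors[ranking[0]]))  # → Kim 3
```

    Degree is only the simplest of the network features such a study examines, but the same edge list feeds centrality and community measures as well.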

  15. Computing with networks of nonlinear mechanical oscillators.

    Directory of Open Access Journals (Sweden)

    Jean C Coulombe

    Full Text Available As it is getting increasingly difficult to achieve gains in the density and power efficiency of microelectronic computing devices because lithographic techniques are reaching fundamental physical limits, new approaches are required to maximize the benefits of distributed sensors, micro-robots or smart materials. Biologically inspired devices, such as artificial neural networks, can process information with a high level of parallelism to efficiently solve difficult problems, even when implemented using conventional microelectronic technologies. We describe a mechanical device, which operates in a manner similar to artificial neural networks, that efficiently solves two difficult benchmark problems (computing the parity of a bit stream, and classifying spoken words). The device consists of a network of masses coupled by linear springs and attached to a substrate by non-linear springs, thus forming a network of anharmonic oscillators. As the masses can directly couple to forces applied on the device, this approach combines sensing and computing functions in a single power-efficient device with compact dimensions.

  16. Computer Networks and Globalization

    Directory of Open Access Journals (Sweden)

    J. Magliaro

    2007-07-01

    Full Text Available Communication and information computer networks connect the world in ways that make globalization more natural and inequity more subtle. As educators, we look at these phenomena holistically, analyzing them from the realist’s view, thus exploring tensions, (in)equity and (in)justice, and from the idealist’s view, thus embracing connectivity, convergence and the development of a collective consciousness. In an increasingly market-driven world we find examples of openness and human generosity that are based on networks, specifically the Internet. After addressing open movements in publishing, the software industry and education, we describe the possibility of a dialectic equilibrium between globalization and indigenousness in view of ecologically designed future smart networks

  17. Computer networks and their implications for nuclear data

    International Nuclear Information System (INIS)

    Carlson, J.

    1992-01-01

    Computer networks represent a valuable resource for accessing information. Just as the computer has revolutionized the ability to process and analyze information, networks have and will continue to revolutionize data collection and access. A number of services are in routine use that would not be possible without the presence of an (inter)national computer network (which will be referred to as the internet). Services such as electronic mail, remote terminal access, and network file transfers are almost a required part of any large scientific/research organization. These services only represent a small fraction of the potential uses of the internet; however, the remainder of this paper discusses some of these uses and some technological developments that may influence these uses

  18. Data acquisition

    International Nuclear Information System (INIS)

    Clout, P.N.

    1982-01-01

    Data acquisition systems are discussed for molecular biology experiments using synchrotron radiation sources. The data acquisition system requirements are considered. The components of the solution are described including hardwired solutions and computer-based solutions. Finally, the considerations for the choice of the computer-based solution are outlined. (U.K.)

  19. Fast computation with spikes in a recurrent neural network

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.; Seung, H. Sebastian

    2002-01-01

    Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample, a network consisting of N integrate-and-fire neurons with self excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, the network performs a winner-take-all computation for all possible external inputs and initial states of the network. The computation is done very quickly: As soon as the winner spikes once, the computation is completed since no other neurons will spike. For some initial states, the winner is the first neuron to spike, and the computation is done at the first spike of the network. In general, there are M potential winners, corresponding to the top M external inputs. When the external inputs are close in magnitude, M tends to be larger. If M>1, the selection of the actual winner is strongly influenced by the initial states. If a special relation between the excitation and inhibition is satisfied, the network always selects the neuron with the maximum external input as the winner
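
    The winner-take-all dynamics described above are easy to reproduce numerically; a minimal Euler-integration sketch with arbitrary parameters (not the paper's analysis):

```python
# N leaky integrate-and-fire neurons, dV_i/dt = -V_i + I_i, threshold 1.
# The inhibition is taken strong enough that the first neuron to spike
# silences all the others, so the first spike ends the computation.
inputs = [1.3, 1.8, 1.1, 1.5]      # constant external drives; max at index 1
V = [0.0] * len(inputs)            # identical initial states
dt, threshold = 1e-3, 1.0

winner = None
while winner is None:
    for i, drive in enumerate(inputs):
        V[i] += dt * (-V[i] + drive)   # Euler step of the membrane equation
        if V[i] >= threshold:          # first threshold crossing wins;
            winner = i                 # strong inhibition stops everyone else
            break

print(winner)  # → 1 (the neuron with the largest input)
```

    Starting from identical initial states, V_i(t) = I_i(1 - e^(-t)), so the neuron with the largest drive crosses threshold first, matching the paper's claim that the maximum-input neuron wins in that case.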

  20. Collective network for computer structures

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Chen, Dong [Croton On Hudson, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Takken, Todd E [Brewster, NY; Steinmacher-Burow, Burkhard D [Wernau, DE; Vranas, Pavlos M [Bedford Hills, NY

    2011-08-16

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in an asynchronous or synchronized manner. When implemented in a massively parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  1. The Caltech physics/engineering network

    International Nuclear Information System (INIS)

    Melvin, J.D.

    1985-01-01

    The California Institute of Technology Physics/Engineering network (referred to as the ''Caltech network'' in this paper) is a software system which has been developed over the last four years for high-speed data acquisition, graphics, and distributed computer resource communications. This paper presents: a brief history of past and current development of the network software; features currently implemented; current speed performance; and applications of the network to research and education at Caltech and at other institutions

  2. Computing motion using resistive networks

    Science.gov (United States)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

    Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in cMOS VLSI circuits and represent plausible candidates for biological vision systems.
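
    Computing the stationary voltages amounts to solving Kirchhoff's current law, G v = i, where G collects the conductances and i the injected currents. A sketch for a three-node resistive chain with unit conductances (values arbitrary, chosen only for illustration):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Three nodes in a chain, unit conductance to each neighbour and a unit "leak"
# conductance from every node to ground; inject 1 unit of current at node 0.
G = [[ 2.0, -1.0,  0.0],
     [-1.0,  3.0, -1.0],
     [ 0.0, -1.0,  2.0]]
i = [1.0, 0.0, 0.0]
v = solve(G, i)
print(v)  # stationary voltages decay with distance from the injection node
```

    The analog chips described in the abstract reach this same solution physically, by letting the network relax, instead of by elimination.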

  3. Computer systems and networks status and perspectives

    CERN Document Server

    Zacharov, V

    1981-01-01

    The properties of computers are discussed, both as separate units and in inter-coupled systems. The main elements of modern processor technology are reviewed and the associated peripheral components are discussed in the light of the prevailing rapid pace of developments. Particular emphasis is given to the impact of very large scale integrated circuitry in these developments. Computer networks are considered in some detail, including common-carrier and local-area networks, and the problem of inter-working is included in the discussion. Components of network systems and the associated technology are also among the topics treated.

  4. Computer systems and networks: Status and perspectives

    International Nuclear Information System (INIS)

    Zacharov, Z.

    1981-01-01

    The properties of computers are discussed, both as separate units and in inter-coupled systems. The main elements of modern processor technology are reviewed and the associated peripheral components are discussed in the light of the prevailing rapid pace of developments. Particular emphasis is given to the impact of very large scale integrated circuitry in these developments. Computer networks are considered in some detail, including common-carrier and local-area networks, and the problem of inter-working is included in the discussion. Components of network systems and the associated technology are also among the topics treated. (orig.)

  5. Social networks a framework of computational intelligence

    CERN Document Server

    Chen, Shyi-Ming

    2014-01-01

    This volume provides the audience with an updated, in-depth and highly coherent material on the conceptually appealing and practically sound information technology of Computational Intelligence applied to the analysis, synthesis and evaluation of social networks. The volume involves studies devoted to key issues of social networks including community structure detection in networks, online social networks, knowledge growth and evaluation, and diversity of collaboration mechanisms.  The book engages a wealth of methods of Computational Intelligence along with well-known techniques of linear programming, Formal Concept Analysis, machine learning, and agent modeling.  Human-centricity is of paramount relevance and this facet manifests in many ways including personalized semantics, trust metric, and personal knowledge management; just to highlight a few of these aspects. The contributors to this volume report on various essential applications including cyber attacks detection, building enterprise social network...

  6. Forecasting the Acquisition of University Spin-Outs: An RBF Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Weiwei Liu

    2017-01-01

    Full Text Available University spin-outs (USOs, creating businesses from university intellectual property, are a relatively common phenomenon. As a knowledge transfer channel, the spin-out business model is attracting extensive attention. In this paper, the impacts of six equity classes on the acquisition of USOs, namely founders, the university, banks, business angels, venture capital, and other equity, are comprehensively analyzed based on theoretical and empirical studies. Firstly, the average distribution of spin-out equity at formation is calculated based on sample data from 350 UK USOs. According to this distribution, a radial basis function (RBF neural network (NN model is employed to forecast the effects of each equity class on the acquisition. To improve the classification accuracy, the novel set-membership method is adopted in the training process of the RBF NN. Furthermore, a simulation test is carried out to measure the effects of the six equity classes on the acquisition of USOs. The simulation results show that an increase in the university’s equity has a negative effect on the acquisition of USOs, whereas increases in the remaining five equity classes have positive effects. Finally, three suggestions are provided to promote the development and growth of USOs.
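
    The core of an RBF network is a layer of Gaussian distance units followed by a linear readout; in the special case of one centre per training point, the readout weights follow from solving a single linear system. A sketch on fabricated 1-D data (not the 350-firm UK sample, and without the paper's set-membership training):

```python
import math

def gaussian(x, c, width=1.0):
    """Gaussian radial basis function centred at c."""
    return math.exp(-((x - c) ** 2) / (2 * width ** 2))

def solve(A, b):
    """Plain Gaussian elimination (adequate here: the kernel matrix is
    symmetric positive definite for distinct centres)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Fabricated 1-D training data; one RBF centre per training point gives exact
# interpolation, the classical closed-form special case of an RBF network.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 4.0, 9.0, 16.0]
Phi = [[gaussian(x, c) for c in xs] for x in xs]
w = solve(Phi, ys)

def predict(x):
    """Forward pass: RBF hidden layer followed by a linear readout."""
    return sum(wi * gaussian(x, c) for wi, c in zip(w, xs))

print(max(abs(predict(x) - y) for x, y in zip(xs, ys)))  # ~0: exact fit
```

    A forecasting model like the paper's would use fewer centres than samples and a regularized or iteratively trained readout; this sketch only shows the RBF-plus-linear-readout structure itself.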

  7. 1989 CERN school of computing

    International Nuclear Information System (INIS)

    Verkerk, C.

    1990-01-01

    These Proceedings contain written versions of lectures delivered at the 1989 CERN School of Computing, covering a variety of topics. Vector and parallel computing are the subjects of five papers: experience with vector processors in HEP; scientific computing with transputers; applications of large transputer arrays; neural networks and neurocomputers; and Amdahl's scaling law. Data-acquisition and event-reconstruction methods were covered in two series of lectures reproduced here: microprocessor-based data-acquisition systems for HERA experiments, and track and vertex fitting. Two speakers treated applications of expert systems: artificial intelligence and expert systems - applications in data acquisition, and application of diagnostic expert systems in the accelerator domain. Other lecture courses covered past and present computing practice: 30 years of computing at CERN, and information processing at the Aerospace Institute in the Federal Republic of Germany. Other papers cover the contents of lectures on: HEPnet, where we are and where we are going; ciphering algorithms; integrated circuit design for supercomputers; GaAs versus Si, theory and practice; and an introduction to operating systems. (orig.)

  8. 3rd International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Chaki, Nabendu

    2016-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 132 scholarly articles, which have been accepted for presentation from over 550 submissions in the Third International Conference on Advanced Computing, Networking and Informatics, 2015, held in Bhubaneswar, India during June 23–25, 2015.

  9. Computer data-acquisition and control system for Thomson-scattering measurements

    International Nuclear Information System (INIS)

    Stewart, K.A.; Foskett, R.D.; Kindsfather, R.R.; Lazarus, E.A.; Thomas, C.E.

    1983-03-01

    The Thomson-Scattering Diagnostic System (SCATPAK II) used to measure the electron temperature and density in the Impurity Study Experiment is interfaced to a Perkin-Elmer 8/32 computer that operates under the OS/32 operating system. The calibration, alignment, and operation of this diagnostic are all under computer control. Data acquired from 106 photomultiplier tubes installed on 15 spectrometers are transmitted to the computer by eighteen 12-channel, analog-to-digital integrators along a CAMAC serial highway. With each laser pulse, 212 channels of data are acquired: 106 channels of signal plus background and 106 channels of background only. Extensive use of CAMAC instrumentation enables large amounts of data to be acquired and control processes to be performed in a time-dependent environment. The Thomson-scattering computer system currently operates in three modes: user interaction and control, data acquisition and transmission, and data analysis. This paper discusses the development and implementation of this system as well as data storage and retrieval

  10. Information and psychomotor skills knowledge acquisition: A student-customer-centered and computer-supported approach.

    Science.gov (United States)

    Nicholson, Anita; Tobin, Mary

    2006-01-01

    This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.

  11. Hybrid Network Simulation for the ATLAS Trigger and Data Acquisition (TDAQ) System

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel; Foguelman, Daniel Jacob

    2015-01-01

    The poster shows the ongoing research in the ATLAS TDAQ group, in collaboration with the University of Buenos Aires, in the area of hybrid data-network simulations. The Data Network and Processing Cluster filters data in real time, achieving a rejection factor on the order of 40000x under real-time latency constraints. The dataflow between the processing units (TPUs) and the Readout System (ROS) exhibits a “TCP incast”-type network pathology which TCP cannot handle efficiently. A credit system is in place that limits the rate of queries and reduces latency. This large computer network and its complex dataflow have been modelled and simulated using PowerDEVS, a DEVS-based simulator. The simulation has been validated and used to produce what-if scenarios for the real network. Network Simulation with Hybrid Flows: speedups and accuracy, combined • For intensive network traffic, discrete-event simulation models (packet-level granularity) soon become prohibitive: too high computing demands. • Fluid Flow simul...

  12. Modeling a Large Data Acquisition Network in a Simulation Framework

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00337030; The ATLAS collaboration; Froening, Holger; Garcia, Pedro Javier; Vandelli, Wainer

    2015-01-01

    The ATLAS detector at CERN records particle collision “events” delivered by the Large Hadron Collider. Its data-acquisition system is a distributed software system that identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several tens of GB/s. It is executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed, and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. In this paper we introduce the main performance issues brought about by this workload, focusing in particular on the so-called TCP incast pathol...
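
    One common mitigation for the many-to-one incast pattern is a credit scheme that caps the number of outstanding fragment requests. The sketch below uses invented queue sizes and credit counts and is not the ATLAS implementation:

```python
# Credit-based request throttling for a many-to-one (incast) workload.
# CREDITS and the fragment names are illustrative values only.
from collections import deque

CREDITS = 4                      # max outstanding fragment requests
pending = deque(f"fragment-{i}" for i in range(12))
in_flight, completed = set(), []

while pending or in_flight:
    # Issue new requests only while credits remain, bounding the burst
    # of simultaneous responses that would otherwise overflow switch buffers.
    while pending and len(in_flight) < CREDITS:
        in_flight.add(pending.popleft())
    # A completed response returns a credit, letting the next request go out.
    done = in_flight.pop()
    completed.append(done)

assert len(completed) == 12
```

At no point are more than four requests outstanding, so at most four responses can collide at the receiver's switch port at once.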

  13. Active system area networks for data intensive computations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-04-01

    The goal of the Active System Area Networks (ASAN) project is to develop hardware and software technologies for the implementation of active system area networks. The term "active" refers to the ability of the network interfaces to perform application-specific as well as system-level computations in addition to their traditional role of data transfer. This project adopts the view that the network infrastructure should be an active computational entity capable of supporting certain classes of computations that would otherwise be performed on the host CPUs. The result is a unique network-wide programming model in which computations are dynamically placed on the host CPUs or the NIs depending upon quality-of-service demands and network/CPU resource availability. The project seeks to demonstrate that such an approach is a better match for data-intensive network-based applications and that the advent of low-cost, powerful embedded processors and configurable hardware makes such an approach economically viable and desirable.

  14. Spin networks and quantum computation

    International Nuclear Information System (INIS)

    Kauffman, L.; Lomonaco, S. Jr.

    2008-01-01

    We review the q-deformed spin network approach to Topological Quantum Field Theory and apply these methods to produce unitary representations of the braid groups that are dense in the unitary groups. The simplest case of these models is the Fibonacci model, itself universal for quantum computation. We here formulate these braid group representations in a form suitable for computation and algebraic work. (authors)

  15. Adaptation of a software development methodology to the implementation of a large-scale data acquisition and control system. [for Deep Space Network

    Science.gov (United States)

    Madrid, G. A.; Westmoreland, P. T.

    1983-01-01

    A progress report is presented on a program to upgrade the existing NASA Deep Space Network with a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal-processing centers in Australia, California, and Spain. The methodology for the improvements is oriented toward single-subsystem development with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system- and network-level synthesis and testing, and system verification and validation. The software implementation is thus far 65 percent complete, and the methodology used to effect the changes, which will permit enhanced tracking of and communication with spacecraft, has proved to be effective.

  16. Integrating Network Management for Cloud Computing Services

    Science.gov (United States)

    2015-06-01

    Backend distributed datastore; high-level objective; network policy; performance metrics; SNAT IP allocation; controller... Microsoft Azure, http://azure.microsoft.com/; Microsoft Azure ExpressRoute, http://azure.microsoft.com/en-us/services/expressroute/; Mobility and Networking... Networking technologies, services, and protocols; performance of computer and communication networks; mobile and wireless communications systems

  17. Discussion on the Technology and Method of Computer Network Security Management

    Science.gov (United States)

    Zhou, Jianlei

    2017-09-01

    With the rapid development of information technology, computer network technology has penetrated all aspects of society, changed people's ways of living and working to a certain extent, and brought great convenience. But computer network technology is not a panacea: it can promote social development, yet it can also cause damage to communities and the country. Because of the openness and ease of sharing of computer networks, network security has been negatively affected; in particular, technical loopholes can lead to damage to network information. On this basis, this paper briefly analyzes computer network security management problems and security measures.

  18. In-Network Computation is a Dumb Idea Whose Time Has Come

    KAUST Repository

    Sapio, Amedeo; Abdelaziz, Ibrahim; Aldilaijan, Abdulla; Canini, Marco; Kalnis, Panos

    2017-01-01

    Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose DAIET, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time.
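
    The aggregation idea can be illustrated with a toy element-wise reduction; the vectors and the switch_aggregate helper below are invented for illustration and are not DAIET code:

```python
# Toy in-network aggregation: instead of every worker's update traversing
# the network to the parameter server, a programmable switch sums the
# updates element-wise, so only one aggregate crosses the last hop.
workers = [
    [1.0, 2.0, 3.0],
    [0.5, 0.5, 0.5],
    [2.0, 1.0, 0.0],
]

def switch_aggregate(updates):
    # An element-wise sum is the kind of commutative, associative reduction
    # that fits the limited compute budget of switch hardware.
    return [sum(col) for col in zip(*updates)]

aggregate = switch_aggregate(workers)
assert aggregate == [3.5, 3.5, 3.5]

# Traffic on the switch-to-server link: 1 vector instead of 3, i.e. a
# two-thirds data reduction in this toy case.
```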

  20. DIMACS Workshop on Interconnection Networks and Mapping, and Scheduling Parallel Computations

    CERN Document Server

    Rosenberg, Arnold L; Sotteau, Dominique; NSF Science and Technology Center in Discrete Mathematics and Theoretical Computer Science; Interconnection networks and mapping and scheduling parallel computations

    1995-01-01

    The interconnection network is one of the most basic components of a massively parallel computer system. Such systems consist of hundreds or thousands of processors interconnected to work cooperatively on computations. One of the central problems in parallel computing is the task of mapping a collection of processes onto the processors and routing network of a parallel machine. Once this mapping is done, it is critical to schedule computations within, and communication among, processors so that the inputs for a process are available where and when the process is scheduled to be computed. This book contains the refereed proceedings of a workshop that brought together researchers from universities and laboratories, as well as practitioners involved in the design, implementation, and application of massively parallel systems. Focusing on interconnection networks of parallel architectures of today and of the near future, the book includes topics such as network topologies, network properties, message routing, network embeddings, network emulation, mappings, and efficient scheduling.

  1. Line-plane broadcasting in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
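
    The three broadcast phases described above can be sketched in a few lines; the grid dimensions and node naming below are illustrative, not taken from the patent:

```python
# Line-plane broadcast on a small X x Y x Z grid of compute nodes:
# the root floods its X-axis line, that line floods the XY plane,
# and the plane floods the full volume along the Z axis.
from itertools import product

X, Y, Z = 4, 3, 2
root = (0, 0, 0)
received = {root}
messages = 0

def send(dst):
    global messages
    messages += 1
    received.add(dst)

# Phase 1: the broadcasting node sends along its first-dimension line.
for x in range(X):
    if (x, root[1], root[2]) != root:
        send((x, root[1], root[2]))

# Phase 2: every node on that line relays along its second-dimension line.
for x in range(X):
    for y in range(Y):
        if y != root[1]:
            send((x, y, root[2]))

# Phase 3: every node in that plane relays along its third-dimension line.
for x in range(X):
    for y in range(Y):
        for z in range(Z):
            if z != root[2]:
                send((x, y, z))

# Every node receives the message exactly once: X*Y*Z - 1 sends in total.
assert received == set(product(range(X), range(Y), range(Z)))
assert messages == X * Y * Z - 1
```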

  2. Effects of Social Networking on Iranian EFL Learners’ Vocabulary Acquisition

    Directory of Open Access Journals (Sweden)

    Fatemeh Khansarian-Dehkordi

    2017-01-01

    Full Text Available The study aimed to scrutinize the effects of social networking on Iranian EFL learners’ vocabulary acquisition. Eighty Iranian EFL learners at the intermediate level participated in a pretest-posttest study after taking a placement test. They were divided into an experimental group, whose participants equipped their mobile phones or tablet PCs with a social networking application (Line) and formed an online group to take part in eighteen virtual instructional sessions, and a control group, whose participants underwent classroom learning in which target words were presented through routine classroom activities. Results of the independent-samples t-test on the posttest indicated that participants in the experimental group outperformed those in the control group. The results have important implications for both pedagogy and theory, especially socio-cultural theories of second language development.

  3. Multi-channel data acquisition system with absolute time synchronization

    Science.gov (United States)

    Włodarczyk, Przemysław; Pustelny, Szymon; Budker, Dmitry; Lipiński, Marcin

    2014-11-01

    We present a low-cost, stand-alone, global-time-synchronized data acquisition system. Our prototype records up to four analog signals with 16-bit resolution in variable ranges and a maximum sampling rate of 1000 S/s. The system simultaneously acquires readouts of external sensors, e.g., a magnetometer or thermometer. A complete data set, including a header containing a timestamp, is stored on a Secure Digital (SD) card or transmitted to a computer over Universal Serial Bus (USB). The estimated time accuracy of the data acquisition is better than ±200 ns. The device is intended for use in a global network of optical magnetometers (the Global Network of Optical Magnetometers for Exotic physics, GNOME), which aims to search for signals heralding physics beyond the Standard Model that could be generated by ordinary spin coupling to exotic particles or anomalous spin interactions.
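
    As an illustration of the kind of record format such a logger might use, here is a sketch of packing samples behind a timestamped header. The field layout, magic bytes, and sizes are assumptions for illustration, not the device's documented format:

```python
# Pack/unpack a timestamped acquisition record: a fixed header (magic,
# 64-bit timestamp in nanoseconds, channel count) followed by signed
# 16-bit samples, one per channel. Layout is hypothetical.
import struct
import time

HEADER_FMT = "<4sQH"   # magic, timestamp (ns), number of channels
SAMPLE_FMT = "<h"      # one signed 16-bit sample

def pack_record(samples, t_ns=None):
    t_ns = time.time_ns() if t_ns is None else t_ns
    body = b"".join(struct.pack(SAMPLE_FMT, s) for s in samples)
    return struct.pack(HEADER_FMT, b"GNOM", t_ns, len(samples)) + body

def unpack_record(blob):
    hdr_size = struct.calcsize(HEADER_FMT)
    magic, t_ns, n = struct.unpack(HEADER_FMT, blob[:hdr_size])
    samples = [struct.unpack_from(SAMPLE_FMT, blob, hdr_size + 2 * i)[0]
               for i in range(n)]
    return t_ns, samples

rec = pack_record([100, -200, 300, -400], t_ns=1_600_000_000_000_000_000)
t, vals = unpack_record(rec)
assert (t, vals) == (1_600_000_000_000_000_000, [100, -200, 300, -400])
```

The same bytes could be appended to a file on the SD card or framed over USB, with the timestamp letting a downstream correlator align records from different stations.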

  4. Code 672 observational science branch computer networks

    Science.gov (United States)

    Hancock, D. W.; Shirk, H. G.

    1988-01-01

    In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.

  5. Network and computing infrastructure for scientific applications in Georgia

    Science.gov (United States)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association (GRENA) provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at the major state universities. The GE-01-GRENA site is included in the European Grid Infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA participates.

  6. 2XIIB computer data acquisition system

    International Nuclear Information System (INIS)

    Tyler, G.C.

    1975-01-01

    All major plasma diagnostic measurements from the 2XIIB experiment are recorded, digitized, and stored by the computer data acquisition system. The raw data are then examined, correlated, and reduced, and useful portions are quickly retrieved to direct the future conduct of the plasma experiment. This is done in real time and on line while the data are current. The immediate availability of this pertinent data has accelerated the rate at which 2XIIB personnel have been able to gain knowledge in the study of plasma containment and fusion interaction. The uptime of the experiment is being used much more effectively than ever before. This paper describes the hardware configuration of our data system in relation to the various plasma parameters measured, the advantages of powerful software routines to reduce and correlate the data, the present plans for expansion of the system, and the problems we have had to overcome in certain areas to meet our original goals

  7. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The network is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and applications to practical engineering problems are discussed to show the efficacy of the proposed neural network.

  8. Software network analyzer for computer network performance measurement planning over heterogeneous services in higher educational institutes

    OpenAIRE

    Ismail, Mohd Nazri

    2009-01-01

    In the 21st century, the convergence of technologies and services in heterogeneous environments has generated multiple types of traffic. This scenario affects the computer networks supporting learning systems in higher educational institutes. Implementation of various services can produce different types of content and quality. Higher educational institutes should have a good computer network infrastructure to support the use of various services. The computer network should provide: i) high bandwidth; ii) ...

  9. Computer system design description for SY-101 hydrogen mitigation test project data acquisition and control system (DACS-1). Revision 1

    International Nuclear Information System (INIS)

    Truitt, R.W.

    1994-01-01

    This document provides descriptions of the components and tasks involved in the computer system for the data acquisition and control of the mitigation tests conducted on waste tank SY-101 at the Hanford Nuclear Reservation. The system was designed and implemented by Los Alamos National Laboratory and supplied to Westinghouse Hanford Company. The computers (both personal computers and specialized data-taking computers) and the software programs of the system will hereafter collectively be referred to as the DACS (Data Acquisition and Control System)

  10. Proceedings of workshop on 'future in HEP computing'

    International Nuclear Information System (INIS)

    Karita, Yukio; Amako, Katsuya; Watase, Yoshiyuki

    1993-12-01

    The workshop was held on March 11 and 12, 1993, at the National Laboratory for High Energy Physics (KEK). A broad movement has formed away from conventional systems centered on large mainframe computers toward downsizing and distributed processing systems, but its destination is not yet in sight. As concrete themes of the 'future in HEP computing', the problems of downsizing and approaches to them, the future perspective of networks, and the adoption of software engineering and object orientation were taken up. At the workshop, lectures were given on requirements in HEP computing, possible solutions from Hitachi and Fujitsu, and network computing with workstations, regarding downsizing and HEP computing; on the approaches at INS and KEK, regarding future computing systems in HEP laboratories; on user requirements for future networks, the network services available in 1995-2005, multi-media communication, and network protocols, regarding future networks; and on object-oriented approaches to software development, OOP for real-time data acquisition and accelerator control, ProdiG activities and the future of FORTRAN, F90, and HPF, regarding OOP and physics, and trends in software development methodology. (K.I.)

  11. Computational network design from functional specifications

    KAUST Repository

    Peng, Chi Han

    2016-07-11

    Connectivity and layout of underlying networks largely determine agent behavior and usage in many environments. For example, transportation networks determine the flow of traffic in a neighborhood, whereas building floorplans determine the flow of people in a workspace. Designing such networks from scratch is challenging, as even local network changes can have large global effects. We investigate how to computationally create networks starting from only high-level functional specifications. Such specifications can be in the form of network density, travel time versus network length, traffic type, destination location, etc. We propose an integer programming-based approach that guarantees that the resultant networks are valid by fulfilling all the specified hard constraints and that they score favorably in terms of the objective function. We evaluate our algorithm in two different design settings, street layouts and floorplans, to demonstrate that diverse networks can emerge purely from high-level functional specifications.
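
    A brute-force toy stand-in for such an integer program illustrates the hard-constraint-plus-objective structure; the four-node "street grid" and segment lengths below are made up for illustration:

```python
# Choose a subset of candidate street segments that keeps every node
# reachable (hard constraint) while minimizing total length (objective).
# A real formulation would hand this to an ILP solver; here we enumerate.
from itertools import combinations

nodes = {"A", "B", "C", "D"}
candidates = {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "D"): 1.0,
              ("A", "D"): 2.5, ("B", "D"): 1.5}

def connected(edges):
    # Simple reachability check from node "A" over undirected edges.
    seen, todo = set(), ["A"]
    while todo:
        n = todo.pop()
        if n in seen:
            continue
        seen.add(n)
        for (u, v) in edges:
            if u == n and v not in seen:
                todo.append(v)
            if v == n and u not in seen:
                todo.append(u)
    return seen == nodes

best = None
for k in range(1, len(candidates) + 1):
    for subset in combinations(candidates, k):
        if connected(subset):
            cost = sum(candidates[e] for e in subset)
            if best is None or cost < best[0]:
                best = (cost, subset)

# The cheapest valid design is the chain A-B, B-C, C-D with length 3.0.
assert best[0] == 3.0
```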

  12. An Optimal Path Computation Architecture for the Cloud-Network on Software-Defined Networking

    Directory of Open Access Journals (Sweden)

    Hyunhun Cho

    2015-05-01

    Full Text Available Legacy networks do not expose precise network-domain information, for scalability, management, and commercial reasons, which makes it very hard to compute an optimal path to a destination. To meet the new network requirements of today's changing ICT environment, the concept of software-defined networking (SDN) has been developed as a technological alternative that overcomes the limitations of the legacy network structure and introduces innovative concepts. The purpose of this paper is to propose an application that calculates the optimal paths for general data transmission and real-time audio/video transmission, which constitute the major services of the National Research & Education Network (NREN), in an SDN environment. The proposed SDN routing computation (SRC) application is designed and applied in a multi-domain network for the efficient use of resources, selection of the optimal path between multiple domains, and optimal establishment of end-to-end connections.
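
    The core of such a path computation can be sketched as a shortest-path search over the controller's view of a multi-domain topology. The topology, link weights, and function below are invented for illustration and are not the SRC application's code:

```python
# Dijkstra shortest-path over a controller's global topology view.
# Two hypothetical domains ("a*" and "b*") are joined at peering point "X".
import heapq

def shortest_path(graph, src, dst):
    # graph: {node: [(neighbor, cost), ...]}
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return list(reversed(path)), dist[dst]

topo = {"a1": [("a2", 1), ("X", 4)], "a2": [("X", 1)],
        "X": [("b1", 2)], "b1": [("b2", 1)]}
path, cost = shortest_path(topo, "a1", "b2")
assert path == ["a1", "a2", "X", "b1", "b2"] and cost == 5
```

An SDN controller with a global view can run exactly this kind of computation, whereas a legacy network would lack the cross-domain link costs needed as input.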

  13. Open Problems in Network-aware Data Management in Exa-scale Computing and Terabit Networking Era

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Byna, Surendra

    2011-12-06

    Accessing and managing large amounts of data is a great challenge in collaborative computing environments where resources and users are geographically distributed. Recent advances in network technology led to next-generation high-performance networks, allowing high-bandwidth connectivity. Efficient use of the network infrastructure is necessary in order to address the increasing data and compute requirements of large-scale applications. We discuss several open problems, evaluate emerging trends, and articulate our perspectives in network-aware data management.

  14. An introduction to computer networks

    CERN Document Server

    Rizvi, SAM

    2011-01-01

    AN INTRODUCTION TO COMPUTER NETWORKS is a comprehensive textbook focused on and designed to elaborate the technical content in light of the TCP/IP reference model, exploring both digital and analog data communication. Various communication protocols of the different layers are discussed along with their pseudo-code. This book covers detailed and practical information about the network layer, along with information about IP (including IPv6), OSPF, and internet multicasting. It also covers TCP congestion control, emphasizes basic principles of fundamental importance concerning the technology and architecture, and provides detailed discussion of leading-edge topics in data communication, LANs, and the network layer.

  15. 1987 CERN school of computing

    International Nuclear Information System (INIS)

    Verkerk, C.

    1988-01-01

    These Proceedings contain written versions of most of the lectures delivered at the 1987 CERN School of Computing. Five lecture series treated various aspects of data communications: integrated services networks, standard LANs and optical LANs, open systems networking in practice, and distributed operating systems. Present and future computer architectures were covered and an introduction to vector processing was given, followed by lectures on vectorization of pattern recognition and Monte Carlo code. Aspects of computing in high-energy physics were treated in lectures on data acquisition and analysis at LEP, on data-base systems in high-energy physics experiments, and on Fastbus. The experience gained with personal work stations was also presented. Various other topics were covered: the use of computers in number theory and in astronomy, fractals, and computer security and access control. (orig.)

  16. SPLAI: Computational Finite Element Model for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ruzana Ishak

    2006-01-01

    Full Text Available A wireless sensor network is a group of sensors linked by a wireless medium to perform a distributed sensing task. The primary interest is their capability to monitor the physical environment through the deployment of numerous tiny, intelligent, wirelessly networked sensor nodes. Our interest is in a sensor network that includes a few specialized nodes, called processing elements, with limited computational capabilities. In this paper, we propose a model called SPLAI that allows the network to compute a finite element problem, in which the processing elements are modeled as the nodes of a linear triangular approximation problem. Our model also considers the case of failures of some of the sensors. A simulation model to visualize this network has been developed using C++ on the Windows environment.

  17. Choice Of Computer Networking Cables And Their Effect On Data ...

    African Journals Online (AJOL)

    Computer networking is the order of the day in this Information and Communication Technology (ICT) age. Although a network can be through a wireless device most local connections are done using cables. There are three main computer-networking cables namely coaxial cable, unshielded twisted pair cable and the optic ...

  18. International Symposium on Computing and Network Sustainability

    CERN Document Server

    Akashe, Shyam

    2017-01-01

    The book is a compilation of technical papers presented at the International Research Symposium on Computing and Network Sustainability (IRSCNS 2016), held in Goa, India, on 1st and 2nd July 2016. The areas covered in the book are sustainable computing and security, sustainable systems and technologies, sustainable methodologies and applications, sustainable network applications and solutions, user-centered services and systems, and mobile data management. The novel and recent technologies presented in the book will be helpful for researchers and industries in their advanced work.

  19. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    Science.gov (United States)

    2017-05-08

    AFRL-AFOSR-VA-TR-2017-0102. Integrated Optoelectronic Networks for Application-Driven Multicore Computing. Sudeep Pasricha, Colorado State University. Grant FA9550-13-1-0110. ...and supportive materials with innovative architectural designs that integrate these components according to system-wide application needs.

  20. The Computer Book of the Internal Medicine Resident: competence acquisition and achievement of learning objectives.

    Science.gov (United States)

    Oristrell, J; Oliva, J C; Casanovas, A; Comet, R; Jordana, R; Navarro, M

    2014-01-01

    The Computer Book of the Internal Medicine Resident (CBIMR) is a computer program that was validated to analyze the acquisition of competences in teams of Internal Medicine residents. Our aims were to analyze the characteristics of the rotations during the Internal Medicine residency and to identify the variables associated with the acquisition of clinical and communication skills, the achievement of learning objectives, and resident satisfaction. All residents of our service (n=20) participated in the study during a period of 40 months. The CBIMR consisted of 22 self-assessment questionnaires, specific for each rotation, with items on services (clinical workload, disease protocolization, resident responsibilities, learning environment, service organization, and teamwork) and items on educational outcomes (acquisition of clinical and communication skills, achievement of learning objectives, overall satisfaction). Associations between service features and learning outcomes were analyzed using bivariate and multivariate analysis. An intense clinical workload, high resident responsibility, and disease protocolization were associated with the acquisition of clinical skills. High clinical competence and teamwork were both associated with better communication skills. Finally, an adequate learning environment was associated with increased clinical competence, the achievement of educational goals, and resident satisfaction. Potentially modifiable variables related to the operation of clinical services had a significant impact on the acquisition of clinical and communication skills, the achievement of educational goals, and resident satisfaction during specialized training in Internal Medicine. Copyright © 2013 Elsevier España, S.L. All rights reserved.

  1. Fair Secure Computation with Reputation Assumptions in the Mobile Social Networks

    Directory of Open Access Journals (Sweden)

    Yilei Wang

    2015-01-01

    Full Text Available With the rapid development of mobile devices and wireless technologies, mobile social networks are becoming increasingly available. People can implement many applications on the basis of mobile social networks. Secure computation, like exchanging information and file sharing, is one such application. Fairness in secure computation, which means that either all parties implement the application or none of them does, is deemed an impossible task in traditional secure computation without mobile social networks. Here we regard the applications in mobile social networks as specific functions and focus on achieving fairness for these functions within mobile social networks in the presence of two rational parties. Rational parties value their utilities when they participate in a secure computation protocol in mobile social networks. We therefore introduce reputation, derived from mobile social networks, into the utility definition, so that rational parties have an incentive to implement the applications for a higher utility. To the best of our knowledge, this is the first fair secure computation protocol in mobile social networks. Furthermore, it finishes within a constant number of rounds and allows both parties to know the terminal round.

  2. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.

  3. LightKone Project: Lightweight Computation for Networks at the Edge

    OpenAIRE

    Van Roy, Peter; TEKK Tour Digital Wallonia

    2017-01-01

    LightKone combines two recent advances in distributed computing to enable general-purpose computing on edge networks: * Synchronization-free programming: Large-scale applications can run efficiently on edge networks by using convergent data structures (based on Lasp and Antidote from previous project SyncFree) → tolerates dynamicity and loose coupling of edge networks * Hybrid gossip: Communication can be made highly resilient on edge networks by combining gossip with classical distributed al...
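
    The convergent data structures mentioned above (CRDTs, as provided by Lasp and Antidote) can be illustrated with a minimal, generic sketch; the grow-only counter below is a textbook CRDT written for this summary, not code from the LightKone project.

```python
class GCounter:
    """Grow-only counter CRDT: one slot per replica; merge is element-wise max.

    Because merge is commutative, associative, and idempotent, replicas
    converge to the same value regardless of gossip order or duplication,
    which is what makes synchronization-free programming possible.
    """

    def __init__(self, n_replicas: int, replica_id: int):
        self.counts = [0] * n_replicas
        self.id = replica_id

    def increment(self) -> None:
        self.counts[self.id] += 1

    def merge(self, other: "GCounter") -> None:
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self) -> int:
        return sum(self.counts)


a, b = GCounter(2, 0), GCounter(2, 1)
a.increment(); a.increment()   # two local updates at replica 0
b.increment()                  # one local update at replica 1
a.merge(b); b.merge(a)         # gossip in either direction
assert a.value() == b.value() == 3
```

    Hybrid gossip then only has to deliver each replica's state to the others eventually; no delivery order or deduplication guarantees are needed.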

  4. Centralized configuration system for a large scale farm of network booted computers

    Science.gov (United States)

    Ballestrero, S.; Brasolin, F.; Dârlea, G.-L.; Dumitru, I.; Scannicchio, D. A.; Twomey, M. S.; Vâlsan, M. L.; Zaytsev, A.

    2012-12-01

    The ATLAS trigger and data acquisition online farm is composed of nearly 3,000 computing nodes, with various configurations, functions and requirements. Maintaining such a cluster is a big challenge from the system administration point of view; thus, various tools have been adopted by the System Administration team to help manage the farm efficiently. In particular, a custom central configuration system, ConfDBv2, was developed for the overall farm management. The majority of the systems are network booted, and are running an operating system image provided by a Local File Server (LFS) via the local area network (LAN). This method guarantees the uniformity of the system and allows, in case of issues, very fast recovery of the local disks, which can be used as a scratch area. It also provides greater flexibility as the nodes can be reconfigured and restarted with a different operating system in a very timely manner. A user-friendly web interface offers a quick overview of the current farm configuration and status, allowing changes to be applied on selected subsets or on the whole farm in an efficient and consistent manner. Also, various actions that would otherwise be time consuming and error prone can be quickly and safely executed. We describe the design, functionality and performance of this system and its web-based interface, including its integration with other CERN and ATLAS databases and with the monitoring infrastructure.

  5. A new DoD initiative: the Computational Research and Engineering Acquisition Tools and Environments (CREATE) program

    International Nuclear Information System (INIS)

    Arevalo, S; Atwood, C; Bell, P; Blacker, T D; Dey, S; Fisher, D; Fisher, D A; Genalis, P; Gorski, J; Harris, A; Hill, K; Hurwitz, M; Kendall, R P; Meakin, R L; Morton, S; Moyer, E T; Post, D E; Strawn, R; Veldhuizen, D v; Votta, L G

    2008-01-01

    In FY2008, the U.S. Department of Defense (DoD) initiated the Computational Research and Engineering Acquisition Tools and Environments (CREATE) program, a $360M program with a two-year planning phase and a ten-year execution phase. CREATE will develop and deploy three computational engineering tool sets for DoD acquisition programs to use in designing aircraft, ships and radio-frequency antennas. The planning and execution of CREATE are based on the 'lessons learned' from case studies of large-scale computational science and engineering projects. The case studies stress the importance of a stable, close-knit development team; a focus on customer needs and requirements; verification and validation; flexible and agile planning, management, and development processes; risk management; realistic schedules and resource levels; balanced short- and long-term goals and deliverables; and stable, long-term support by the program sponsor. Since it began in FY2008, the CREATE program has built a team and project structure, developed requirements and begun validating them, identified candidate products, established initial connections with the acquisition programs, begun detailed project planning and development, and generated the initial collaboration infrastructure necessary for success by its multi-institutional, multidisciplinary teams.

  6. Time of acquisition and network stability in pediatric resting-state functional magnetic resonance imaging

    NARCIS (Netherlands)

    T.J.H. White (Tonya); R.L. Muetzel (Ryan); M. Schmidt (Marcus); S.J.E. Langeslag (Sandra); V.W.V. Jaddoe (Vincent); A. Hofman (Albert); V.D. Calhoun Vince D. (V.); F.C. Verhulst (Frank); H.W. Tiemeier (Henning)

    2014-01-01

    textabstractResting-state functional magnetic resonance imaging (rs-fMRI) has been shown to elucidate reliable patterns of brain networks in both children and adults. Studies in adults have shown that rs-fMRI acquisition times of ∼5 to 6 min provide adequate sampling to produce stable spatial maps

  7. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    Science.gov (United States)

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks, the main problem that arises is the high price and limited number of network devices that students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  8. Use of VME computers for the data acquisition system of the PHOENICS experiment

    International Nuclear Information System (INIS)

    Zucht, B.

    1989-10-01

    The data acquisition program PHON (PHOENICS ONLINE) for the PHOENICS experiment at the stretcher ring ELSA in Bonn is described. PHON is based on a fast parallel CAMAC readout with special VME front-end processors (VIP) and a VAX computer, allowing comfortable control and programming. Special tools have been developed to facilitate the implementation of user programs. The PHON compiler allows the user to specify, in a simple language, the arrangement of the CAMAC modules to be read out for each event (the camaclist). The camaclist is translated into 68000 assembly and runs on the front-end processors, making high data rates possible. User programs for monitoring and control of the experiment normally require low data rates and therefore run on the VAX computer. CAMAC operations are supported by the PHON CAMAC library. For graphic representation of the data the CERN standard program libraries HBOOK and PAW are used. The data acquisition system is very flexible and can be easily adapted to different experiments. (orig.)

  9. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected by Ethernet. Data exchanges between tasks located on each processing element are realized in two ways. One is the socket interface, a standard library on recent UNIX operating systems. The other is network-connecting software named Parallel Virtual Machine (PVM), free software developed at ORNL that turns many workstations connected to a network into a parallel computer. This paper discusses the availability of parallel computing using networked UNIX workstations and compares it with specialized parallel systems (Transputer and iPSC/860) in a Monte Carlo simulation, which generally shows a high parallelization ratio. (author)
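
    The master-worker pattern the paper evaluates (a host distributing Monte Carlo work to tasks on separate processors and combining the partial results) can be sketched in modern Python; this hypothetical example uses the standard multiprocessing module in place of sockets or PVM message passing.

```python
import random
from multiprocessing import Pool

def count_hits(args):
    """Worker task: count random points falling inside the unit quarter-circle."""
    n_samples, seed = args
    rng = random.Random(seed)  # independent stream per task
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def parallel_pi(n_tasks=4, samples_per_task=100_000):
    """Master: distribute tasks, gather partial counts, combine into an estimate."""
    with Pool(n_tasks) as pool:
        hits = pool.map(count_hits,
                        [(samples_per_task, seed) for seed in range(n_tasks)])
    return 4.0 * sum(hits) / (n_tasks * samples_per_task)

if __name__ == "__main__":
    print(parallel_pi())
```

    Monte Carlo problems parallelize well precisely because the tasks share no state: only the small partial counts cross the network.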

  10. Mechanisms of protection of information in computer networks and systems

    Directory of Open Access Journals (Sweden)

    Sergey Petrovich Evseev

    2011-10-01

    Full Text Available Protocols of information protection in computer networks and systems are investigated. The basic types of security threats arising from the use of computer networks are classified. The basic mechanisms, services and implementation variants of cryptosystems for maintaining the authentication, integrity and confidentiality of transmitted information are examined, and their advantages and drawbacks are described. Promising directions in the development of cryptographic transformations for information protection in computer networks and systems are defined and analyzed.
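
    As a minimal illustration of the authentication and integrity services surveyed here, the sketch below uses Python's standard hmac module; confidentiality would additionally require an encryption primitive, which the standard library does not provide.

```python
import hmac
import hashlib

def sign_message(key: bytes, message: bytes) -> bytes:
    """Produce an authentication tag binding the message to the shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key: bytes, message: bytes, tag: bytes) -> bool:
    """Check integrity and authenticity; compare_digest resists timing attacks."""
    return hmac.compare_digest(sign_message(key, message), tag)

key = b"shared-secret"
msg = b"sensor reading: 42"
tag = sign_message(key, msg)
assert verify_message(key, msg, tag)             # authentic message accepted
assert not verify_message(key, msg + b"!", tag)  # tampered message rejected
```

    A single flipped byte in the message or tag causes verification to fail, which is the integrity guarantee such mechanisms provide.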

  11. Multi-channel data acquisition system with absolute time synchronization

    Energy Technology Data Exchange (ETDEWEB)

    Włodarczyk, Przemysław, E-mail: pan.wlodarczyk@uj.edu.pl [Department of Electronics, AGH University of Science and Technology, Mickiewicza 30, 30-059 Kraków (Poland); Institute of Physics, Jagiellonian University, Reymonta 4, 30-059 Kraków (Poland); Pustelny, Szymon, E-mail: pustelny@uj.edu.pl [Institute of Physics, Jagiellonian University, Reymonta 4, 30-059 Kraków (Poland); Budker, Dmitry [Department of Physics, University of California at Berkeley, Berkeley, California 94720-7300 (United States); Lipiński, Marcin [Department of Electronics, AGH University of Science and Technology, Mickiewicza 30, 30-059 Kraków (Poland)

    2014-11-01

    We present a low-cost, stand-alone, global-time-synchronized data acquisition system. Our prototype allows recording up to four analog signals with 16-bit resolution in variable ranges and a maximum sampling rate of 1000 S/s. The system simultaneously acquires readouts of external sensors, e.g., a magnetometer or thermometer. A complete data set, including a header containing a timestamp, is stored on a Secure Digital (SD) card or transmitted to a computer over Universal Serial Bus (USB). The estimated time accuracy of the data acquisition is better than ±200 ns. The device is intended for use in a global network of optical magnetometers (the Global Network of Optical Magnetometers for Exotic physics – GNOME), which aims to search for signals heralding physics beyond the Standard Model that could be generated by ordinary spin coupling to exotic particles or by anomalous spin interactions.

  12. Multiple network alignment on quantum computers

    Science.gov (United States)

    Daskin, Anmer; Grama, Ananth; Kais, Sabre

    2014-12-01

    Comparative analyses of graph-structured datasets underlie diverse problems. Examples of these problems include identification of conserved functional components (biochemical interactions) across species, structural similarity of large biomolecules, and recurring patterns of interactions in social networks. A large class of such analysis methods quantifies the topological similarity of nodes across networks. The resulting correspondence of nodes across networks, also called node alignment, can be used to identify invariant subgraphs across the input graphs. Given graphs as input, alignment algorithms use topological information to assign a similarity score to each tuple of nodes, with one element (node) drawn from each of the input graphs. Nodes are considered similar if their neighbors are also similar. An alternate, equivalent view of these network alignment algorithms is to consider the Kronecker product of the input graphs and to identify high-ranked nodes in the Kronecker product graph. Conventional methods such as PageRank and HITS (Hypertext-Induced Topic Selection) can be used for this purpose. These methods typically require computation of the principal eigenvector of a suitably modified Kronecker product matrix of the input graphs. We adopt this alternate view to address the problem of multiple network alignment. Using the phase estimation algorithm, we show that the multiple network alignment problem can be efficiently solved on quantum computers. We characterize the accuracy and performance of our method and show that it can deliver exponential speedups over conventional (non-quantum) methods.
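
    The conventional (non-quantum) view described in the abstract can be sketched as power iteration toward the principal eigenvector of the Kronecker product of two adjacency matrices; this toy NumPy example is an illustration of that classical baseline, not the paper's quantum phase-estimation method.

```python
import numpy as np

def align_scores(A: np.ndarray, B: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """Approximate the principal eigenvector of kron(A, B) by power iteration.

    Entry (i, j) of the result scores the alignment of node i in the first
    graph with node j in the second: a pair ranks highly when the pairs of
    their neighbors also rank highly, which the Kronecker product encodes.
    """
    K = np.kron(A, B)
    v = np.full(K.shape[0], 1.0 / K.shape[0])  # uniform starting vector
    for _ in range(n_iter):
        v = K @ v
        v /= np.linalg.norm(v)
    return v.reshape(A.shape[0], B.shape[0])

# Two copies of a triangle graph: by symmetry, all node pairings score equally.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
S = align_scores(A, A)
assert np.allclose(S, S[0, 0])
```

    Forming kron(A, B) explicitly costs quadratic memory in the product of the graph sizes, which is exactly the bottleneck that motivates matrix-free and, here, quantum approaches.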

  13. The Effect of Computer Simulations on Acquisition of Knowledge and Cognitive Load: A Gender Perspective

    Science.gov (United States)

    Kaheru, Sam J.; Kriek, Jeanne

    2016-01-01

    A study on the effect of the use of computer simulations (CS) on the acquisition of knowledge and cognitive load was undertaken with 104 Grade 11 learners in four schools in rural South Africa on the physics topic geometrical optics. Owing to the lack of resources a teacher-centred approach was followed in the use of computer simulations. The…

  14. The ALICE data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F. [European Organization for Nuclear Research (CERN), Geneva 23 (Switzerland); Dénes, E. [Research Institute for Particle and Nuclear Physics, Wigner Research Center, Budapest (Hungary); Divià, R.; Fuchs, U. [European Organization for Nuclear Research (CERN), Geneva 23 (Switzerland); Grigore, A. [European Organization for Nuclear Research (CERN), Geneva 23 (Switzerland); Politehnica Univesity of Bucharest, Bucharest (Romania); Kiss, T. [Cerntech Ltd., Budapest (Hungary); Simonetti, G. [Dipartimento Interateneo di Fisica ‘M. Merlin’, Bari (Italy); Soós, C.; Telesca, A.; Vande Vyvre, P. [European Organization for Nuclear Research (CERN), Geneva 23 (Switzerland); Haller, B. von, E-mail: bvonhall@cern.ch [European Organization for Nuclear Research (CERN), Geneva 23 (Switzerland)

    2014-03-21

    In this paper we describe the design, the construction, the commissioning and the operation of the Data Acquisition (DAQ) and Experiment Control Systems (ECS) of the ALICE experiment at the CERN Large Hadron Collider (LHC). The DAQ and the ECS are the systems used respectively for the acquisition of all physics data and for the overall control of the experiment. They are two computing systems made of hundreds of PCs and data storage units interconnected via two networks. The collection of experimental data from the detectors is performed by several hundred high-speed optical links. We describe in detail the design considerations for these systems handling the extreme data throughput resulting from central lead-ion collisions at LHC energy. The implementation of the resulting requirements into hardware (custom optical links and commercial computing equipment), infrastructure (racks, cooling, power distribution, control room), and software led to many innovative solutions, which are described together with a presentation of all the major components of the systems, as currently realized. We also report on the performance achieved during the first period of data taking (from 2009 to 2013), which often exceeded that specified in the DAQ Technical Design Report.

  15. Gamma camera image acquisition, display, and processing with the personal microcomputer

    International Nuclear Information System (INIS)

    Lear, J.L.; Pratt, J.P.; Roberts, D.R.; Johnson, T.; Feyerabend, A.

    1990-01-01

    The authors evaluated the potential of a microcomputer for direct acquisition, display, and processing of gamma camera images. Boards for analog-to-digital conversion and image zooming were designed, constructed, and interfaced to the Macintosh II (Apple Computer, Cupertino, Calif). Software was written for processing of single, gated, and time series images. The system was connected to gamma cameras, and its performance was compared with that of dedicated nuclear medicine computers. Data could be acquired from gamma cameras at rates exceeding 200,000 counts per second, with spatial resolution exceeding intrinsic camera resolution. Clinical analysis could be rapidly performed. This system performed better than most dedicated nuclear medicine computers with respect to speed of data acquisition and spatial resolution of images while maintaining full compatibility with the standard image display, hard-copy, and networking formats. It could replace such dedicated systems in the near future as software is refined

  16. Software development on the DIII-D control and data acquisition computers

    International Nuclear Information System (INIS)

    Penaflor, B.G.; McHarg, B.B. Jr.; Piglowski, D.

    1997-11-01

    The various software systems developed for the DIII-D tokamak have played a highly visible and important role in tokamak operations and fusion research. Because of the heavy reliance on in-house developed software encompassing all aspects of operating the tokamak, much attention has been given to the careful design, development and maintenance of these software systems. Software systems responsible for tokamak control and monitoring, neutral beam injection, and data acquisition demand the highest level of reliability during plasma operations. These systems, made up of hundreds of programs totaling thousands of lines of code, have presented a wide variety of software design and development issues ranging from low-level hardware communications, database management, and distributed process control to man-machine interfaces. The focus of this paper will be to describe how software is developed and managed for the DIII-D control and data acquisition computers. It will include an overview and status of software systems implemented for tokamak control, neutral beam control, and data acquisition. The issues and challenges faced in developing and managing the large amounts of software in support of the dynamic and ever-changing needs of the DIII-D experimental program will be addressed.

  17. Local computer network of the JINR Neutron Physics Laboratory

    International Nuclear Information System (INIS)

    Alfimenkov, A.V.; Vagov, V.A.; Vajdkhadze, F.

    1988-01-01

    A new high-speed local computer network, using an intelligent network adapter (NA) as its hardware base, has been developed in the JINR Neutron Physics Laboratory to increase operating efficiency and data transfer rate. The NA consists of a computer bus interface, a cable former, and a microcomputer segment that implements the channel-level protocol in software and organizes bidirectional information transfer through a direct-access channel between the monochannel and computer memory, with or without buffering in the NA's memory.

  18. A Computational Framework for Quantifying and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation

    Science.gov (United States)

    Cioaca, Alexandru

    A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided.
The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as
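
    For context, the strong-constraint 4D-Var cost function that such a framework minimizes can be written in its standard textbook form (stated here for orientation, not quoted from the dissertation):

```latex
J(x_0) = \frac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
       + \frac{1}{2} \sum_{k=0}^{N} \bigl(\mathcal{H}_k(x_k) - y_k\bigr)^{\mathsf T}
         R_k^{-1} \bigl(\mathcal{H}_k(x_k) - y_k\bigr),
\qquad x_k = \mathcal{M}_{0 \to k}(x_0),
```

    where x_b is the background state with error covariance B, y_k are the observations at time t_k with error covariance R_k, H_k is the observation operator, and M_{0→k} is the model propagator. Observation-impact and sensor-placement questions then ask how changes in y_k or R_k propagate, through this constrained minimization, to the resulting analysis.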

  19. Status of DOE information network modifications

    International Nuclear Information System (INIS)

    Fuchs, R.

    1988-01-01

    This paper provides an update on changes that have been made or are taking place to the Department of Energy's (DOE) National Information Network. Areas of focus are as follows: data acquisition from commercial disposal site operators, specifically, the information delivery system called Manifest Information Management System; improved access methods to DOE Information Network; progress on personal computer interfaces, and availability of end user support

  20. Future data acquisition at ISIS

    International Nuclear Information System (INIS)

    Pulford, W.C.A.; Quinton, S.P.H.; Johnson, M.W.; Norris, J.

    1989-01-01

    Data collection techniques at ISIS are fast reaching the point where the current computer systems will no longer be able to migrate the data to long-term storage, let alone enable their analysis at a speed compatible with continuous use of the ISIS instruments. The current data acquisition electronics (DAE 1) and migration path work effectively but have a number of inherent difficulties: (1) Seven instruments are equipped with VAX computers as their Front End Minicomputers (FEM). Unfortunately these machines usually possess insufficient processor power to perform some of the more complex data reduction. This means that the raw data have necessarily to be networked to the HUB computer before analysis. (2) The size of bulk store memory is restricted to 16 Mbytes by the 24 bit address field of Multibus. (3) The DAE error detection and analysis system of FEM is crude. It is clear that the most effective method to improve on this situation is to reduce the data volume flowing between the DAE and the FEM and to provide facilities to monitor data acquisition within the DAE. For these purposes processing power must be incorporated closer to the point of data collection. It has been decided to implement processing elements within DAE 2 (the next generation of DAE) in the form of intelligent memory boards. 6 figs., 1 tab

  1. Genetic networks and soft computing.

    Science.gov (United States)

    Mitra, Sushmita; Das, Ranajit; Hayashi, Yoichi

    2011-01-01

    The analysis of gene regulatory networks provides enormous information on various fundamental cellular processes involving growth, development, hormone secretion, and cellular communication. Their extraction from available gene expression profiles is a challenging problem. Such reverse engineering of genetic networks offers insight into cellular activity toward prediction of adverse effects of new drugs or possible identification of new drug targets. Tasks such as classification, clustering, and feature selection enable efficient mining of knowledge about gene interactions in the form of networks. It is known that biological data is prone to different kinds of noise and ambiguity. Soft computing tools, such as fuzzy sets, evolutionary strategies, and neurocomputing, have been found to be helpful in providing low-cost, acceptable solutions in the presence of various types of uncertainties. In this paper, we survey the role of these soft methodologies and their hybridizations, for the purpose of generating genetic networks.

  2. Using the network as bus system for long discharge data acquisition and data processing

    Energy Technology Data Exchange (ETDEWEB)

    Hennig, Ch. [Max-Planck-Institut Fuer Plasmaphysik, EURATOM Association, Teilinstitut Greifswald, Wendelsteinstrasse 1, 17491 Greifswald (Germany)]. E-mail: Christine.Hennig@ipp.mpg.de; Bluhm, T. [Max-Planck-Institut Fuer Plasmaphysik, EURATOM Association, Teilinstitut Greifswald, Wendelsteinstrasse 1, 17491 Greifswald (Germany); Heimann, P. [Max-Planck-Institut Fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, 85748 Garching (Germany); Kroiss, H. [Max-Planck-Institut Fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, 85748 Garching (Germany); Kuehner, G. [Max-Planck-Institut Fuer Plasmaphysik, EURATOM Association, Teilinstitut Greifswald, Wendelsteinstrasse 1, 17491 Greifswald (Germany); Kuehntopf, H. [Max-Planck-Institut Fuer Plasmaphysik, EURATOM Association, Teilinstitut Greifswald, Wendelsteinstrasse 1, 17491 Greifswald (Germany); Maier, J. [Max-Planck-Institut Fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, 85748 Garching (Germany); Zilker, M. [Max-Planck-Institut Fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, 85748 Garching (Germany)

    2006-07-15

    Data acquisition in shot-based fusion experiments is characterized by a large amount of data collected within a short time. The stellarator experiment Wendelstein 7-X (W7-X) will produce a considerably higher amount of data in continuous, steady-state operation within half an hour. Therefore, it is not sufficient to wait until all data has been acquired and stored in the database before viewing and analyzing it. Continuous and robust data transport is needed for (1) a distributed data monitoring system to visualize trends, (2) continuous acquisition of data available on the network, and (3) message signaling. The paper introduces a network channel concept covering a system-wide available data description, a half-sync/half-async communication pattern, data serialization, and identification and matching of data. It focuses on the IP Multicast/UDP implementation for the data monitoring system and presents results of performance measurements. Suitability and limitations are discussed, and an appropriate answer to the question 'When is IP Multicast/UDP reliable?' is given.
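
    The data serialization and matching steps of such a channel concept can be sketched with a fixed binary header carrying a channel id and timestamp; the 'W7XD' magic value and the field layout below are invented for illustration and are not the actual W7-X wire format.

```python
import struct
import time

# Hypothetical wire format for one channel sample: magic, channel id,
# microsecond timestamp, payload length, followed by the raw payload.
# '!' selects network byte order for interoperability across hosts.
HEADER = struct.Struct("!4sHQI")
MAGIC = b"W7XD"

def serialize(channel_id: int, payload: bytes) -> bytes:
    """Prefix a payload with an identifying header for transport (e.g. UDP)."""
    ts_us = time.time_ns() // 1000  # acquisition timestamp in microseconds
    return HEADER.pack(MAGIC, channel_id, ts_us, len(payload)) + payload

def deserialize(packet: bytes):
    """Recover (channel id, timestamp, payload) from a received packet."""
    magic, channel_id, ts_us, length = HEADER.unpack_from(packet)
    if magic != MAGIC:
        raise ValueError("not a channel packet")
    payload = packet[HEADER.size:HEADER.size + length]
    return channel_id, ts_us, payload

pkt = serialize(7, b"\x01\x02\x03")
cid, ts, data = deserialize(pkt)
assert (cid, data) == (7, b"\x01\x02\x03")
```

    On the receiving side, the (channel id, timestamp) pair is what allows a monitoring client to identify and match samples arriving from different multicast senders.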

  3. Computer network access to scientific information systems for minority universities

    Science.gov (United States)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

    The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.

  4. Functional requirements for gas characterization system computer software

    International Nuclear Information System (INIS)

    Tate, D.D.

    1996-01-01

    This document provides the Functional Requirements for the Computer Software operating the Gas Characterization System (GCS), which monitors the combustible gases in the vapor space of selected tanks. Necessary computer functions are defined to support design, testing, operation, and change control. The GCS requires several individual computers to address the control and data acquisition functions of instruments and sensors. These computers are networked for communication and must multitask to accommodate operation in parallel.

  5. Integrated evolutionary computation neural network quality controller for automated systems

    Energy Technology Data Exchange (ETDEWEB)

    Patro, S.; Kolarik, W.J. [Texas Tech Univ., Lubbock, TX (United States). Dept. of Industrial Engineering

    1999-06-01

    With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to the identification and control of dynamic processes has been discussed. The limitations of using neural networks for control purposes have been pointed out, and a different technique, evolutionary computation, has been discussed. The results of identifying and controlling an unstable, dynamic process using evolutionary computation methods have been presented. A framework for an integrated system, using both neural networks and evolutionary computation, has been proposed to identify the process and then control the product quality, in a dynamic, multivariable system, in real-time.

  6. Good year, bad year: changing strategies, changing networks? A two-year study on seed acquisition in northern Cameroon

    Directory of Open Access Journals (Sweden)

    Chloé Violon

    2016-06-01

    Full Text Available Analysis of seed exchange networks at a single point in time may reify sporadic relations into apparently fixed and long-lasting ones. In northern Cameroon, where the environment is not only strongly seasonal but also shows unpredictable interannual variation, farmers' social networks are flexible from year to year. When adjusting their strategies, Tupuri farmers do not systematically solicit the same partners to acquire the desired propagules. Seed acquisitions documented during a single cropping season may thus not accurately reflect the underlying larger social network that can be mobilized at the local level. To test this hypothesis, we documented, at the outset of two cropping seasons (2010 and 2011), the relationships through which seeds were acquired by the members of 16 households in a Tupuri community. In 2011, farmers faced sudden failure of the rains and had to solicit distant relatives, highlighting their ability to quickly trigger specific social relations to acquire necessary seeding material. Observing the same set of individuals during two successive years and the seed sources they solicited in each year enabled us to discriminate repeated relations from sporadic ones. Although farmers did not acquire seeds from the same individuals from one year to the next, they relied on quite similar relational categories of people. However, the worse weather conditions during the second year led to (1) a shift from red sorghum seeds to pearl millet seeds, (2) a geographical extension of the network, and (3) an increased participation of women in seed acquisitions. In critical situations, women mobilized their own kin almost exclusively. We suggest that studying the seed acquisition network over a single year provides a misrepresentation of the underlying social network. Depending on the difficulties farmers face, they may occasionally call on relationships that transcend the local relationships used each year.

  7. Quantum Random Networks for Type 2 Quantum Computers

    National Research Council Canada - National Science Library

    Allara, David L; Hasslacher, Brosl

    2006-01-01

Random boolean networks (RBNs) have been studied theoretically and computationally in order to be able to use their remarkable self-healing and large basins of attraction properties as quantum computing architectures, especially...

  8. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solution of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.
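The decomposition strategy described in the abstract can be illustrated with a single-process sketch: split the image into strips with halo (ghost) rows, process each strip independently, and stitch the interiors back together. This is only a schematic of the idea; the paper distributes sub-images over a network of machines and runs Region Competition per sub-image, whereas here the per-strip step is a trivial threshold, and `process` and `distributed_apply` are hypothetical names.

```python
import numpy as np

def process(tile):
    """Placeholder local operation (thresholding); stands in for the
    per-subdomain segmentation step."""
    return (tile > 0.5).astype(np.uint8)

def distributed_apply(image, n_workers, halo=1):
    """Split an image into row strips with halo (ghost) rows, process each
    strip independently, and stitch only the interior rows back together."""
    h = image.shape[0]
    bounds = np.linspace(0, h, n_workers + 1, dtype=int)
    out = np.empty(image.shape, dtype=np.uint8)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        glo, ghi = max(0, lo - halo), min(h, hi + halo)  # extend by halo rows
        result = process(image[glo:ghi])                 # independent work unit
        out[lo:hi] = result[lo - glo : (lo - glo) + (hi - lo)]
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64))
seg = distributed_apply(img, 4)  # identical to process(img) for a purely local op
```

For a purely local operation the stitched result matches the whole-image result exactly; the halo only matters once the per-strip step looks at neighboring pixels, which is what makes the real distributed segmentation require inter-node communication.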

  9. Computing Tutte polynomials of contact networks in classrooms

    Science.gov (United States)

    Hincapié, Doracelly; Ospina, Juan

    2013-05-01

Objective: The topological complexity of contact networks in classrooms and the potential transmission of an infectious disease were analyzed by sex and age. Methods: The Tutte polynomials, some topological properties, and the number of spanning trees were used to algebraically compute the topological complexity. Computations were made with the Maple package GraphTheory. Published data of mutually reported social contacts within a classroom, taken from a primary school and consisting of children in the age ranges 4-5, 7-8 and 10-11, were used. Results: The algebraic complexity of the Tutte polynomial and the probability of disease transmission increase with age. The contact networks are not bipartite graphs, and gender segregation was observed, especially in younger children. Conclusion: Tutte polynomials are tools to understand the topology of contact networks and to derive numerical indexes of such topologies. It is possible to establish relationships between the Tutte polynomial of a given contact network and the potential transmission of an infectious disease within such a network.
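The authors compute Tutte polynomials with Maple's GraphTheory package; the underlying deletion-contraction recursion can be sketched independently in Python. This is a toy evaluator (exponential in the number of edges, suitable only for small contact networks), not the authors' code:

```python
from collections import deque

def _connected(edges, a, b):
    """True if vertices a and b lie in the same component of the multigraph."""
    if a == b:
        return True
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, queue = {a}, deque([a])
    while queue:
        for m in adj.get(queue.popleft(), []):
            if m == b:
                return True
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return False

def _contract(edges, u, v):
    """Merge vertex v into u (contraction of the edge (u, v))."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def tutte(edges, x, y):
    """Evaluate T(G; x, y) by the classic deletion-contraction recursion."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                         # loop: factor of y
        return y * tutte(rest, x, y)
    if not _connected(rest, u, v):     # bridge: factor of x
        return x * tutte(_contract(rest, u, v), x, y)
    return tutte(rest, x, y) + tutte(_contract(rest, u, v), x, y)

triangle = [(0, 1), (1, 2), (0, 2)]
print(tutte(triangle, 1, 1))  # -> 3, the number of spanning trees of C3
```

For the triangle, T(x, y) = x^2 + x + y, so T(1, 1) = 3 recovers the number of spanning trees mentioned in the abstract, and T(2, 2) = 2^|E| = 8 counts all edge subsets.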

  10. Data-flow Performance Optimisation on Unreliable Networks: the ATLAS Data-Acquisition Case

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2015-01-01

The ATLAS detector at CERN records proton-proton collisions delivered by the Large Hadron Collider (LHC). The ATLAS Trigger and Data-Acquisition (TDAQ) system identifies, selects, and stores interesting collision data. These are received from the detector readout electronics at an average rate of 100 kHz. The typical event data size is 1 to 2 MB. Overall, the ATLAS TDAQ can be seen as a distributed software system executed on a farm of roughly 2000 commodity PCs. The worker nodes are interconnected by an Ethernet network that at the restart of the LHC in 2015 is expected to experience a sustained throughput of several tens of GB/s. A particular type of challenge posed by this system, and by DAQ systems in general, is the inherently bursty nature of the data traffic from the readout buffers to the worker nodes. This can cause instantaneous network congestion and therefore performance degradation. The effect is particularly pronounced for unreliable network interconnections, such as Ethernet.

  11. Data-flow performance optimization on unreliable networks: the ATLAS data-acquisition case

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2014-01-01

The ATLAS detector at CERN records proton-proton collisions delivered by the Large Hadron Collider (LHC). The ATLAS Trigger and Data-Acquisition (TDAQ) system identifies, selects, and stores interesting collision data. These are received from the detector readout electronics at an average rate of 100 kHz. The typical event data size is 1 to 2 MB. Overall, the ATLAS TDAQ can be seen as a distributed software system executed on a farm of roughly 2000 commodity PCs. The worker nodes are interconnected by an Ethernet network that at the restart of the LHC in 2015 is expected to experience a sustained throughput of several tens of GB/s. A particular type of challenge posed by this system, and by DAQ systems in general, is the inherently bursty nature of the data traffic from the readout buffers to the worker nodes. This can cause instantaneous network congestion and therefore performance degradation. The effect is particularly pronounced for unreliable network interconnections, such as Ethernet. In this presentation we...

  12. CAMAC based computer--computer communications via microprocessor data links

    International Nuclear Information System (INIS)

    Potter, J.M.; Machen, D.R.; Naivar, F.J.; Elkins, E.P.; Simmonds, D.D.

    1976-01-01

Communications between the central control computer and remote, satellite data acquisition/control stations at the Clinton P. Anderson Meson Physics Facility (LAMPF) are presently accomplished through the use of CAMAC-based Data Link Modules. With the advent of the microprocessor, a new philosophy for digital data communications has evolved. Data Link modules containing microprocessor controllers provide link management and communication network protocol through algorithms executed in the Data Link microprocessor

  13. ICAMS: a new system for automated emulsion data acquisition and analysis

    International Nuclear Information System (INIS)

    Arthur, A.A.; Brown, W.L.; Friedlander, E.M.; Heckman, H.H.; Jones, R.W.; Karant, Y.J.; Turney, A.D.

    1983-01-01

    This chapter describes an Interactive Computer Assisted Measurement System (ICAMS) designed to permit the acquisition and analysis of emulsion scan and measurement data at a rate much faster than any existing manual techniques. The system has two major components, the central computer and individual data-taking stations, called ODS (Optical Data Station). It is a modern distributed network system, where a central PDP-11 computer running under RSX-11M V4 communicates with two-ported UNIBUS memory. Each ODS is equipped with a 6512 microprocessor using the Motorola bus and is also equipped with a Creative Micro Systems 9611 arithmetic processor

  14. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
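A minimal sketch of the key ingredient named in the abstract, connection probability that decays with distance, for neurons on a ring. A Gaussian kernel is one common choice; the paper's exact profile may differ, and all names here are illustrative:

```python
import numpy as np

def ring_connection_probs(n, sigma):
    """Connection probability between n neurons on a ring, decaying with
    wrap-around distance as a Gaussian (one common, hypothetical choice)."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d)                 # ring (periodic) distance
    return np.exp(-(d / sigma) ** 2 / 2)

P = ring_connection_probs(200, sigma=10.0)   # pairwise probabilities
rng = np.random.default_rng(1)
A = rng.random(P.shape) < P                  # sample a random adjacency matrix
np.fill_diagonal(A, False)                   # no self-connections
```

Nearby neurons are connected with near-certain probability while distant pairs are almost never connected; it is this spatial structure, rather than uniform random wiring, that the authors find enables reliable reservoir computations.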

  15. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-06

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  16. A prototype switched Ethernet data acquisition system

    International Nuclear Information System (INIS)

    Ye Gaoying; Deng Huichen; Chen Liaoyuan; Liu Li; Wang Xinhui

    1999-01-01

A prototype switched Ethernet data acquisition system has been built and successfully operated in HL-1M tokamak experiments. The system is based on a switched high-bandwidth Ethernet network to which the CAMAC crates are directly interfaced. It takes advantage of the advanced features of the LAN switch and the Ethernet CAMAC controller (ECC 1365 MK III, a HYTEC product) to avoid rewriting the CAMAC driver for an individual computer system and to ensure a high data transmission rate between the CAMAC system and host computers on the network. It is a new approach to DAS architecture and provides a solution for the well-known bottleneck problem in traditional distributed DAS for fusion research. The average throughput of the test system reaches over 100 Mbps. The system also features an easy and low-cost migration from a traditional distributed DAS. In the paper, the hardware configuration, software structure, performance of the system and the method of migrating from the current DAS are discussed in detail. (orig.)

  17. Integrating Computer-Assisted Language Learning in Saudi Schools: A Change Model

    Science.gov (United States)

    Alresheed, Saleh; Leask, Marilyn; Raiker, Andrea

    2015-01-01

    Computer-assisted language learning (CALL) technology and pedagogy have gained recognition globally for their success in supporting second language acquisition (SLA). In Saudi Arabia, the government aims to provide most educational institutions with computers and networking for integrating CALL into classrooms. However, the recognition of CALL's…

  18. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    Directory of Open Access Journals (Sweden)

    Shuo Gu

    2017-01-01

With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at the network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  19. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.

    Science.gov (United States)

    Gu, Shuo; Pei, Jianfeng

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  20. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  1. Computer system design description for the spare pump mini-dacs data acquisition and control system

    International Nuclear Information System (INIS)

    Vargo, G.F. Jr.

    1994-01-01

    The attached document outlines the computer software design for the mini data acquisition and control system (DACS), that supports the testing of the spare pump for Tank 241-SY-101, at the maintenance and storage facility (MASF)

  2. The University of Michigan's Computer-Aided Engineering Network.

    Science.gov (United States)

    Atkins, D. E.; Olsen, Leslie A.

    1986-01-01

    Presents an overview of the Computer-Aided Engineering Network (CAEN) of the University of Michigan. Describes its arrangement of workstations, communication networks, and servers. Outlines the factors considered in hardware and software decision making. Reviews the program's impact on students. (ML)

  3. Conceptual metaphors in computer networking terminology ...

    African Journals Online (AJOL)

Conceptual metaphor theory (Lakoff & Johnson, 1980) is used as a basic framework for analysing and explaining the occurrence of metaphor in the terminology used by computer networking professionals in the information technology (IT) industry. An analysis of linguistic ...

  4. Automation and schema acquisition in learning elementary computer programming : implications for the design of practice

    NARCIS (Netherlands)

    van Merrienboer, Jeroen J.G.; van Merrienboer, J.J.G.; Paas, Fred G.W.C.

    1990-01-01

Two complementary processes may be distinguished in learning a complex cognitive skill such as computer programming. First, automation offers task-specific procedures that may directly control programming behavior; second, schema acquisition offers cognitive structures that provide analogies in new

  5. Data acquisition and analysis at the Structural Biology Center

    International Nuclear Information System (INIS)

    Westbrook, M.L.; Coleman, T.A.; Daly, R.T.; Pflugrath, J.W.

    1996-01-01

    The Structural Biology Center (SBC), a national user facility for macromolecular crystallography located at Argonne National Laboratory's Advanced Photon Source, is currently being built and commissioned. SBC facilities include a bending-magnet beamline, an insertion-device beamline, laboratory and office space adjacent to the beamlines, and associated instrumentation, experimental apparatus, and facilities. SBC technical facilities will support anomalous dispersion phasing experiments, data collection from microcrystals, data collection from crystals with large molecular structures and rapid data collection from multiple related crystal structures for protein engineering and drug design. The SBC Computing Systems and Software Engineering Group is tasked with developing the SBC Control System, which includes computing systems, network, and software. The emphasis of SBC Control System development has been to provide efficient and convenient beamline control, data acquisition, and data analysis for maximal facility and experimenter productivity. This paper describes the SBC Control System development, specifically data acquisition and analysis at the SBC, and the development methods used to meet this goal

  6. RESEARCH OF ENGINEERING TRAFFIC IN COMPUTER UZ NETWORK USING MPLS TE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

V. M. Pakhomova

    2014-12-01

Purpose. Railway transport in Ukraine requires the use of computer networks of different technologies: Ethernet, Token Bus, Token Ring, FDDI and others. In combined computer networks on railway transport it is necessary to use the packet-switching technology of multiprotocol networks, MPLS (MultiProtocol Label Switching), more effectively; these networks are based on the use of labels. A packet network must transmit different types of traffic with a given quality of service. The purpose of the research is to develop a methodology for determining the sequence of flow assignments for the considered fragment of the UZ computer network. Methodology. When optimizing traffic management in MPLS networks, an important role belongs to traffic engineering (TE) technology. The main mechanism of TE in MPLS is the use of unidirectional tunnels (MPLS TE tunnels) to specify the path of the given traffic. A mathematical model of the traffic engineering problem in the UZ computer network with MPLS TE technology was constructed. The UZ computer network is represented by a directed graph whose vertices are the routers of the computer network and whose arcs model the links between nodes. The minimum value of the maximum utilization of the TE tunnels serves as the optimization criterion. Findings. Six options for assigning flows were determined; a rational sequence of flows was found, at which the maximum utilization of the TE tunnels of the considered simplified fragment of the UZ computer network does not exceed 0.5. Originality. A method for solving the traffic engineering problem in the multiprotocol UZ network with MPLS TE technology was proposed; for each class its own path is laid, depending on the bandwidth and channel loading. Practical value. The values of the maximum utilization of TE tunnels in UZ computer networks can be determined with the developed software model «TraffEng». The input parameters of the model: number of routers, channel capacity, the
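The search described in the abstract, trying different orderings of flows and keeping the one that minimizes the maximum TE-tunnel utilization, can be brute-forced in a toy setting. The demands and tunnel capacities below are hypothetical, not the UZ data or the «TraffEng» model:

```python
from itertools import permutations

def max_utilization(order, capacities):
    """Greedily route flows (in the given order) onto the TE tunnel that
    stays least utilized, and return the resulting maximum utilization."""
    load = [0.0] * len(capacities)
    for f in order:
        i = min(range(len(capacities)),
                key=lambda k: (load[k] + f) / capacities[k])
        load[i] += f
    return max(l / c for l, c in zip(load, capacities))

flows = [4, 3, 2, 1]        # hypothetical demands, Mb/s
capacities = [10, 10]       # two parallel TE tunnels, Mb/s
best = min(max_utilization(p, capacities) for p in permutations(flows))
print(best)  # -> 0.5: the best ordering balances the tunnels perfectly
```

Because the greedy assignment depends on the order in which flows arrive, different sequences yield different maximum utilizations, which is exactly why the paper searches for a rational sequence of flow assignments.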

  7. Distributed CAN-BUS-based data acquisition system

    International Nuclear Information System (INIS)

    Orekhov, D.I.; Chepurnov, A.S.; Majmistov, D.I.; Sabel'nikov, A.A.

    2007-01-01

A distributed remote-control system for large nuclear physical setups is intended to collect, store, and analyze data arriving from detecting devices and to visualize them via the WEB. The system uses the CAN industrial data transmission network and the DeviceNet high-level protocol. The hardware part is the set of controllers, which convert signals of the detecting devices into a frequency and transmit them in digital form via the CAN network to the host computer. The software realizes the DeviceNet protocol stack, which ensures the data acquisition and transmission. The user interface is based on dynamic WEB pages. The system is used for monitoring dark noises of photomultiplier tubes in the BOREXINO neutrino detector (Italy)

  8. An Overview of Computer Network security and Research Technology

    OpenAIRE

    Rathore, Vandana

    2016-01-01

    The rapid development in the field of computer networks and systems brings both convenience and security threats for users. Security threats include network security and data security. Network security refers to the reliability, confidentiality, integrity and availability of the information in the system. The main objective of network security is to maintain the authenticity, integrity, confidentiality, availability of the network. This paper introduces the details of the technologies used in...

  9. Recurrent autoassociative networks and holistic computations

    NARCIS (Netherlands)

Stoianov; Amari, SI; Giles, CL; Gori, M; Piuri,

    2000-01-01

The paper presents an experimental study of holistic computations over distributed representations (DRs) of sequences developed by Recurrent Autoassociative Networks (RAN). Three groups of holistic operators are studied: extracting symbols at fixed position, extracting symbols at a variable

  10. Application of local area networks to accelerator control systems at the Stanford Linear Accelerator

    International Nuclear Information System (INIS)

    Fox, J.D.; Linstadt, E.; Melen, R.

    1983-03-01

The history and current status of SLAC's SDLC networks for distributed accelerator control systems are discussed. These local area networks have been used for instrumentation and control of the linear accelerator. Network topologies, protocols, physical links, and logical interconnections are discussed for specific applications in distributed data acquisition and control systems, computer networks and accelerator operations

  11. Computational neural network regression model for Host based Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Gautam

    2016-09-01

The current scenario of information gathering and storing in secure systems is a challenging task due to increasing cyber-attacks. There exist computational neural network techniques designed for intrusion detection systems, which provide security to a single machine and to an entire network's machines. In this paper, we have used two types of computational neural network models, namely, the Generalized Regression Neural Network (GRNN) model and the Multilayer Perceptron Neural Network (MPNN) model, for a Host based Intrusion Detection System using log files that are generated by a single personal computer. The simulation results show the correctly classified percentage of the normal and abnormal (intrusion) classes using a confusion matrix. On the basis of the results and discussion, we found that the Host based Intrusion Systems Model (HISM) significantly improved the detection accuracy while retaining a minimum false alarm rate.
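A GRNN in Specht's formulation is essentially Nadaraya-Watson kernel regression: Gaussian kernels centred on the training samples, with the prediction a kernel-weighted average of the training labels. A toy one-dimensional sketch (illustrative data, not the paper's log-file features):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of training labels
    (Nadaraya-Watson kernel regression with bandwidth sigma)."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * yi for w, yi in zip(weights, train_y)) / sum(weights)

# toy 1-D feature: label 1.0 = intrusion-like, 0.0 = normal-like
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]
score = grnn_predict(2.9, xs, ys, sigma=0.3)  # query near the intrusion samples
```

With a small bandwidth the prediction is dominated by the nearest training samples, so a query near the intrusion-labelled points scores close to 1; unlike an MLP, a GRNN needs no iterative training, which is one reason the two models are often compared.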

  12. Computer network defense through radial wave functions

    Science.gov (United States)

    Malloy, Ian J.

The purpose of this research is to synthesize basic and fundamental findings in quantum computing, as applied to the attack and defense of conventional computer networks. The concept focuses on uses of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and if one were to implement it as non-trivial mitigation, it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions in that they are severely difficult for a civilian to avoid triggering given the unknown position of a landmine. Thus, the importance of understanding a logic bomb is relevant and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts in certain respects using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research focus applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime-number-based cryptography with trapdoor functions, and modeling radio wave propagation against an event from unknown parameters. This comes as a program relying on the artificial intelligence concept of an expert system in conjunction with trigger events for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here trapdoor denotes both the form of cipher and the implied relationship to logic bombs.

  13. Data of NODDI diffusion metrics in the brain and computer simulation of hybrid diffusion imaging (HYDI acquisition scheme

    Directory of Open Access Journals (Sweden)

    Chandana Kodiweera

    2016-06-01

This article provides NODDI diffusion metrics in the brains of 52 healthy participants and computer simulation data to support the compatibility of the hybrid diffusion imaging (HYDI) acquisition scheme (“Hybrid diffusion imaging” [1]) in fitting the neurite orientation dispersion and density imaging (NODDI) model (“NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain” [2]). HYDI is an extremely versatile diffusion magnetic resonance imaging (dMRI) technique that enables various analysis methods using a single diffusion dataset. One of the diffusion data analysis methods is the NODDI computation, which models the brain tissue with three compartments: fast isotropic diffusion (e.g., cerebrospinal fluid), anisotropic hindered diffusion (e.g., extracellular space), and anisotropic restricted diffusion (e.g., intracellular space). The NODDI model produces microstructural metrics in the developing brain, aging brain, or human brain with neurologic disorders. The first dataset provided here contains the means and standard deviations of NODDI metrics in 48 white matter regions of interest (ROIs), averaged across 52 healthy participants. The second dataset provided here is the computer simulation with initial conditions guided by the first dataset as inputs and gold standard for model fitting. The computer simulation data provide a direct comparison of NODDI indices computed from the HYDI acquisition [1] to the NODDI indices computed from the originally proposed acquisition [2]. These data are related to the accompanying research article “Age Effects and Sex Differences in Human Brain White Matter of Young to Middle-Aged Adults: A DTI, NODDI, and q-Space Study” [3].
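The three-compartment mixture described above can be written schematically as S = f_iso*S_iso + (1 - f_iso)*(f_ic*S_ic + (1 - f_ic)*S_ec). The sketch below uses deliberately simplified compartment signals (no orientation-dispersion integral, gradient assumed along the stick, tortuosity-style hindered diffusivity), so it illustrates only the mixture structure, not the actual NODDI model:

```python
import math

def three_compartment_signal(b, f_iso, f_ic, d_iso=3.0e-3, d_par=1.7e-3):
    """Schematic three-compartment diffusion signal in the spirit of NODDI.
    b in s/mm^2, diffusivities in mm^2/s; expressions are simplified for
    illustration and are NOT the full NODDI forward model."""
    s_iso = math.exp(-b * d_iso)            # free isotropic (CSF-like) diffusion
    s_ic = math.exp(-b * d_par)             # restricted stick along the gradient
    d_perp = d_par * (1.0 - f_ic)           # tortuosity-style hindered diffusivity
    s_ec = math.exp(-b * d_perp)            # hindered extracellular diffusion
    return f_iso * s_iso + (1 - f_iso) * (f_ic * s_ic + (1 - f_ic) * s_ec)

s0 = three_compartment_signal(0.0, f_iso=0.1, f_ic=0.5)  # normalized at b = 0
```

At b = 0 every compartment contributes unit signal and the volume fractions sum to one, so the normalized signal is 1; the signal then decays monotonically with b, which is the basic behavior any acquisition scheme (HYDI or the original protocol) must sample to fit the fractions.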

  14. TMX-U computer system in evolution

    International Nuclear Information System (INIS)

    Casper, T.A.; Bell, H.; Brown, M.; Gorvad, M.; Jenkins, S.; Meyer, W.; Moller, J.; Perkins, D.

    1986-01-01

Over the past three years, the total TMX-U diagnostic data base has grown to exceed 10 megabytes from over 1300 channels, roughly triple the originally designed size. This acquisition and processing load has resulted in an experiment repetition rate exceeding 10 minutes per shot using the five original Hewlett-Packard HP-1000 computers with their shared disks. Our new diagnostics tend to be multichannel instruments, which, in our environment, can be more easily managed using local computers. For this purpose, we are using HP series 9000 computers for instrument control, data acquisition, and analysis. Fourteen such systems are operational, with processed-format output exchanged via a shared resource manager. We are presently implementing the necessary hardware and software changes to create a local area network allowing us to combine the data from these systems with our main data archive. The expansion of our diagnostic system using the parallel acquisition and processing concept allows us to increase our data base with a minimum of impact on the experimental repetition rate

  15. Several problems of algorithmization in integrated computation programs on third generation computers for short circuit currents in complex power networks

    Energy Technology Data Exchange (ETDEWEB)

    Krylov, V.A.; Pisarenko, V.P.

    1982-01-01

Methods of modeling complex power networks with short circuits in the networks are described. The methods are implemented in integrated computation programs for short-circuit currents and equivalents in electrical networks with a large number of branch points (up to 1000) on a computer with a limited on-line memory capacity (M equals 4030 for the computer).

  16. Developments of the general computer network of NIPNE-HH

    International Nuclear Information System (INIS)

    Mirica, M.; Constantinescu, S.; Danet, A.

    1997-01-01

Since 1991 the general computer network of NIPNE-HH was developed and connected to the RNCN (Romanian National Computer Network) for research and development, and it offers the Romanian physics research community an efficient and cost-effective infrastructure to communicate and collaborate with fellow researchers abroad, and to collect and exchange the most up-to-date information in their research area. RNCN is targeted on the following main objectives: - Setting up a technical and organizational infrastructure meant to provide national and international electronic services for the Romanian scientific research community; - Providing a rapid and competitive tool for the exchange of information in the framework of the Research and Development (R-D) community; - Using the scientific and technical databases available in the country and offered by the national networks of other countries through international networks; - Providing support for information and for scientific and technical co-operation. RNCN has two international links: to EBONE via ACONET (64 kbps) and to EuropaNET via Hungarnet (64 kbps). The guiding principle in designing the project of the general computer network of NIPNE-HH, as part of RNCN, was to implement an open system based on OSI standards, taking into account the following criteria: - development of a flexible solution, according to OSI specifications; - reliable gateway solutions with the existing network already in use, allowing access to worldwide networks; - use of the TCP/IP transport protocol for each Local Area Network (LAN) and for the connection to RNCN; - integration of different and heterogeneous software and hardware platforms (DOS, Windows, UNIX, VMS, Linux, etc.) through specific interfaces. The major objectives achieved in developing the general computer network of NIPNE-HH are: linking all the existing and newly installed computer equipment and providing adequate connectivity.
LANs from departments

  17. Six networks on a universal neuromorphic computing substrate

    Directory of Open Access Journals (Sweden)

Thomas Pfeil

    2013-02-01

In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.

  18. Six networks on a universal neuromorphic computing substrate.

    Science.gov (United States)

    Pfeil, Thomas; Grübl, Andreas; Jeltsch, Sebastian; Müller, Eric; Müller, Paul; Petrovici, Mihai A; Schmuker, Michael; Brüderle, Daniel; Schemmel, Johannes; Meier, Karlheinz

    2013-01-01

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.

  19. Latest developments for a computer aided thermohydraulic network

    International Nuclear Information System (INIS)

    Alemberti, A.; Graziosi, G.; Mini, G.; Susco, M.

    1999-01-01

    Thermohydraulic networks are 1-D systems characterized by a small number of basic component types (pumps, valves, heat exchangers, etc.) connected by pipes and bounded spatially by a defined number of boundary conditions (tanks, atmosphere, etc.). The network system is simulated by the well-known computer program RELAP5/MOD3. Information concerning the network geometry, component behaviour, and initial and boundary conditions is usually supplied to the RELAP5 code in an ASCII input file by means of 'input cards'. CATNET (Computer Aided Thermalhydraulic NETwork) is a graphical user interface that, under specific user guidelines which completely define its range of applicability, permits a very high level of standardization and simplification of the RELAP5/MOD3 input deck development process as well as of the output processing. The characteristics of the components (pipes, valves, pumps, etc.) defining the network system can be entered through CATNET. The CATNET interface provides special functions to compute form losses in the most typical bending and branching configurations. When the input for all system components is ready, CATNET generates the RELAP5/MOD3 input file. Finally, by means of CATNET, the RELAP5/MOD3 code can be run and its output results transformed into an intuitive display form. The paper presents an example of application of the CATNET interface as well as the latest developments, which greatly simplify the work of users and reduce the possibility of input errors. (authors)
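The deck generation that CATNET automates can be sketched as a simple card emitter that turns a component description into an ASCII block. The card numbering and field layout below are illustrative only, not actual RELAP5/MOD3 input syntax:

```python
def make_pipe_card(comp_num, name, n_volumes, length_m, area_m2):
    """Emit an ASCII card block for a pipe component.

    The card layout here is an illustrative stand-in for a RELAP5-style
    input deck; real card numbers and word formats differ."""
    lines = [
        f"* component {name}",
        f"{comp_num}0000  {name}  pipe",          # component name and type
        f"{comp_num}0001  {n_volumes}",           # number of control volumes
        f"{comp_num}0101  {area_m2:.4f}  {n_volumes}",            # flow area
        f"{comp_num}0301  {length_m / n_volumes:.4f}  {n_volumes}",  # volume length
    ]
    return "\n".join(lines)

deck = make_pipe_card(120, "hotleg", 4, 8.0, 0.05)
print(deck)
```

A GUI such as CATNET would collect `n_volumes`, `length_m`, and `area_m2` from forms and concatenate blocks like this one into the full input file.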

  20. A versatile computer based system for data acquisition manipulation and presentation

    International Nuclear Information System (INIS)

    Bardsley, D.J.

    1985-12-01

    A data acquisition system based on the Microdata M 1600L data logger and a Digital Equipment Corporation (DEC) VT103 computer system has been set up for use in a wide range of research and development projects in the field of fission detectors and associated technology. The philosophy underlying the system and its important features are described. Operating instructions for the logger are given, and its application to experimental measurements is considered. Observations on whole system performance, and recommendations for improvements are made. (U.K.)

  1. Using a progress computer for the direct acquisition and processing of radiation protection data

    International Nuclear Information System (INIS)

    Barz, H.G.; Borchardt, K.D.; Hacke, J.; Kirschfeld, K.E.; Kluppak, B.

    1976-01-01

    A process computer will be used at the Hahn-Meitner-Institute to rationalize radiation protection measures. Approximately 150 transmitters are to be connected to this computer, in particular the radiation measuring devices of a nuclear reactor, of hot cells, and of a heavy-ion accelerator, as well as the emission and environment monitoring systems. The advantages of this method are described: central data acquisition, central alarm and stoppage information, processing of selected measurement values, and the possibility of quick disturbance analysis. Furthermore, the authors report on the preparations already completed, particularly the transmission of digital and analog values to the computer. (orig./HP) [de

  2. Networking Micro-Processors for Effective Computer Utilization in Nursing

    OpenAIRE

    Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia

    1982-01-01

    Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes one process of networking complementary resources at three institutions: Prairie View A&M University, Texas A&M University, and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in...

  3. ORGANIZATION OF CLOUD COMPUTING INFRASTRUCTURE BASED ON SDN NETWORK

    Directory of Open Access Journals (Sweden)

    Alexey A. Efimenko

    2013-01-01

    Full Text Available The article presents the main approaches to building cloud computing infrastructure on SDN networks in present-day data processing centers (DPC). The main indicators of the management effectiveness of DPC network infrastructure are determined. Examples of solutions for the creation of virtual network devices are provided.

  4. Biophysical constraints on the computational capacity of biochemical signaling networks

    Science.gov (United States)

    Wang, Ching-Hao; Mehta, Pankaj

    Biophysics fundamentally constrains the computations that cells can carry out. Here, we derive fundamental bounds on the computational capacity of biochemical signaling networks that utilize post-translational modifications (e.g. phosphorylation). To do so, we combine ideas from the statistical physics of disordered systems and the observation by Tony Pawson and others that the biochemistry underlying protein-protein interaction networks is combinatorial and modular. Our results indicate that the computational capacity of signaling networks is severely limited by the energetics of binding and the need to achieve specificity. We relate our results to one of the theoretical pillars of statistical learning theory, Cover's theorem, which places bounds on the computational capacity of perceptrons. PM and CHW were supported by a Simons Investigator in the Mathematical Modeling of Living Systems Grant, and NIH Grant No. 1R35GM119461 (both to PM).
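Cover's theorem, invoked above as a bound on perceptron capacity, has a closed form that is easy to evaluate: n points in general position admit C(n, d) = 2 * sum_{k=0}^{d-1} binom(n-1, k) linearly separable dichotomies, where d is the dimension of the weight space. A small sketch (ours, not the authors'):

```python
from math import comb

def cover_count(n: int, d: int) -> int:
    """Number of linearly separable labelings (dichotomies) of n points in
    general position, per Cover's function counting theorem. Here d is the
    dimension of the weight space (use d+1 for a perceptron with a bias)."""
    return 2 * sum(comb(n - 1, k) for k in range(d))

# 3 points in the plane (affine separation, so d = 2 + 1):
print(cover_count(3, 3))   # 8  (all 2^3 labelings are separable)
# 4 points in the plane: the two XOR-style labelings fail.
print(cover_count(4, 3))   # 14 (out of 16)
```

For n much larger than d the separable fraction collapses toward zero, which is the sense in which capacity is "severely limited".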

  5. Convolutional networks for fast, energy-efficient neuromorphic computing

    Science.gov (United States)

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  6. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  7. A Lossless Network for Data Acquisition

    OpenAIRE

    Jereczek, Grzegorz Edmund; Lehmann Miotto, Giovanna

    2016-01-01

    The bursty many-to-one communication pattern, typical for data acquisition systems, is particularly demanding for commodity TCP/IP and Ethernet technologies. We expand the study of lossless switching in software running on commercial-off-the-shelf servers, using the ATLAS experiment as a case study. In this paper we extend the popular software switch, Open vSwitch, with a dedicated, throughput-oriented buffering mechanism for data acquisition. We compare the performance under heavy congestion...
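The buffering argument can be illustrated with a toy discrete-time model of incast: many senders burst simultaneously into one egress queue, and the drop count depends on whether the queue is sized like shallow switch SRAM or like host DRAM. This is our simplification for illustration, not the authors' Open vSwitch mechanism:

```python
from collections import deque

def incast_drops(n_senders, burst, queue_cap, drain_per_tick):
    """Toy many-to-one incast model: every sender offers one packet per tick
    into a single egress queue; the queue drains drain_per_tick packets per
    tick. Returns the number of packets dropped at the queue."""
    queue = deque()
    drops = 0
    pending = [burst] * n_senders
    while any(pending) or queue:
        for i in range(n_senders):          # arrivals this tick
            if pending[i]:
                pending[i] -= 1
                if len(queue) < queue_cap:
                    queue.append(i)
                else:
                    drops += 1              # tail drop: buffer exhausted
        for _ in range(min(drain_per_tick, len(queue))):
            queue.popleft()                 # service this tick
    return drops

# A shallow, hardware-like buffer loses packets under the burst...
print(incast_drops(40, 100, queue_cap=128, drain_per_tick=10))
# ...while a DRAM-sized software buffer absorbs it entirely.
print(incast_drops(40, 100, queue_cap=10**6, drain_per_tick=10))  # 0
```

The second case is the regime the paper targets: buffering constrained only by host memory, so congestion converts into queueing delay instead of loss and TCP retransmission timeouts.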

  8. Development of network based control and data acquisition systems for diagnostics using CCD detectors. Application to LHD experiments

    International Nuclear Information System (INIS)

    Kado, Shinichiro; Nakanishi, Hideya; Ida, Katsumi; Kojima, Mamoru

    2000-01-01

    The need for CCD detectors as plasma diagnostic tools has recently increased. However, many CCD providers have developed their own control systems, which are difficult to customize for the network-based data acquisition clusters that serve various kinds of diagnostics. This paper presents the development of systems in which CCD detectors are controlled, and their data acquired, over the network. By using the Client/Server (C/S) model of the Windows NT operating system, together with the associated block transfer method via shared memory, the hardware dependence is hidden behind a server service, the CCD list sequencer. The client program is designed for the LHD (Large Helical Device) discharge operation sequences and the data acquisition system. (author)

  9. Markov Networks in Evolutionary Computation

    CERN Document Server

    Shakya, Siddhartha

    2012-01-01

    Markov networks and other probabilistic graphical models have recently received an upsurge of attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful applications of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov network based EDAs are reviewed in the book. Hot current researc...
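As a minimal runnable illustration of the EDA family the book covers, here is a univariate marginal distribution algorithm (UMDA) on the OneMax problem. Markov-network EDAs generalize exactly this loop by replacing the independent per-bit marginals with an undirected dependency model; all parameter values below are our own choices, not the book's:

```python
import random

def umda_onemax(n_bits=20, pop=60, elite=30, gens=40, seed=1):
    """UMDA, the simplest EDA: sample a population from a product of
    per-bit marginals, select the fittest half, and re-estimate the
    marginals from the selected individuals."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                       # marginal P(bit_i == 1)
    best = 0
    for _ in range(gens):
        population = [[int(rng.random() < pi) for pi in p] for _ in range(pop)]
        population.sort(key=sum, reverse=True)   # fitness = number of ones
        best = max(best, sum(population[0]))
        selected = population[:elite]
        # Re-estimate marginals, clamped away from 0/1 to keep diversity.
        p = [min(0.95, max(0.05, sum(ind[i] for ind in selected) / elite))
             for i in range(n_bits)]
    return best

print(umda_onemax())  # typically reaches the optimum of 20
```

Swapping the product distribution for a Markov network turns the re-estimation step into structure and parameter learning of an undirected model, which is the subject of the book.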

  10. Efficient computation in adaptive artificial spiking neural networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); R.B.P. Nusselder (Roeland); H.S. Scholte; S.M. Bohte (Sander)

    2017-01-01

    textabstractArtificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of

  11. Autonomous data acquisition system for Paks NPP process noise signals

    International Nuclear Information System (INIS)

    Lipcsei, S.; Kiss, S.; Czibok, T.; Dezso, Z.; Horvath, Cs.

    2005-01-01

    A prototype of a new-concept noise diagnostics data acquisition system has been developed recently to replace the ageing present system. The new system is capable of collecting the whole available set of noise signals simultaneously. Signal plugging and data acquisition are performed by autonomous systems (installed at each reactor unit) that are controlled through the standard plant network from a central computer installed at a suitable location. Experts use this central unit to process and archive the data series downloaded from the reactor units; it also provides selected noise diagnostics information to other departments. The paper describes the hardware and software architecture of the new system in detail, emphasising the potential benefits of the new approach. (author)

  12. Multiple-user data acquisition and analysis system

    International Nuclear Information System (INIS)

    Manzella, V.; Chrien, R.E.; Gill, R.L.; Liou, H.I.; Stelts, M.L.

    1981-01-01

    The nuclear physics program at the Brookhaven National Laboratory High Flux Beam Reactor (HFBR) employs a pair of PDP-11 computers for the dual functions of data acquisition and analysis. The data acquisition is accomplished through CAMAC and features a microprogrammed branch driver to accommodate various experimental inputs. The acquisition computer performs the functions of multi-channel analyzers, multiscaling and time-sequenced multichannel analyzers and gamma-ray coincidence analyzers. The data analysis computer is available for rapid processing of data tapes written by the acquisition computer. The ability to accommodate many users is facilitated by separating the data acquisition and analysis functions, and allowing each user to tailor the analysis to the specific requirements of his own experiment. The system is to be upgraded soon by the introduction of a dual port disk to allow a data base to be available to each computer

  13. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
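The decomposition step described above can be sketched as a tiling of the image across a process grid, with each sub-image extended by a halo of ghost pixels that the computers exchange at tile boundaries. The grid shape and halo width below are illustrative assumptions, not the paper's exact scheme:

```python
def decompose(height, width, nodes_y, nodes_x, halo=1):
    """Split a height x width image into sub-images on a nodes_y x nodes_x
    process grid. Each tile is returned as (y0, y1, x0, x1) bounds, extended
    by `halo` ghost pixels (clipped at the image border) for boundary
    exchange between neighbouring processes."""
    tiles = []
    for j in range(nodes_y):
        for i in range(nodes_x):
            y0, y1 = j * height // nodes_y, (j + 1) * height // nodes_y
            x0, x1 = i * width // nodes_x, (i + 1) * width // nodes_x
            tiles.append((max(0, y0 - halo), min(height, y1 + halo),
                          max(0, x0 - halo), min(width, x1 + halo)))
    return tiles

for tile in decompose(100, 100, 2, 2):
    print(tile)   # four overlapping 51x51 tiles
```

Each process then runs the region-competition iterations on its own tile, synchronizing only the halo regions and label bookkeeping over the network.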

  14. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Directory of Open Access Journals (Sweden)

    Yaser Afshar

    Full Text Available Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  15. Computing all hybridization networks for multiple binary phylogenetic input trees.

    Science.gov (United States)

    Albrecht, Benjamin

    2015-07-30

    The computation of phylogenetic trees on the same set of species based on different orthologous genes can lead to incongruent trees. One possible explanation for this behavior is interspecific hybridization events recombining genes of different species. An important approach to analyzing such events is the computation of hybridization networks. This work presents the first algorithm computing the hybridization number as well as a set of representative hybridization networks for multiple binary phylogenetic input trees on the same set of taxa. To improve its practical runtime, we show how this algorithm can be parallelized. Moreover, we demonstrate the efficiency of the software Hybroscale, containing an implementation of our algorithm, by comparing it to PIRNv2.0, which is so far the best available software computing the exact hybridization number for multiple binary phylogenetic trees on the same set of taxa. The algorithm is part of the software Hybroscale, which was developed specifically for the investigation of hybridization networks, including their computation and visualization. Hybroscale is freely available and runs on all three major operating systems. Our simulation study indicates that our approach is on average 100 times faster than PIRNv2.0. Moreover, we show how Hybroscale improves the interpretation of the reported hybridization networks by adding certain features to its graphical representation.
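The notion of a "displayed" cluster is easy to make concrete for a single tree, the degenerate case of a network with no reticulation nodes; the hardness of hybridization-network computation arises only once reticulations are allowed. A sketch over our own nested-tuple tree encoding:

```python
def clusters(tree):
    """All clusters (sets of leaf labels below some node) of a rooted tree
    given as nested tuples, e.g. (('a', 'b'), ('c', 'd')). For a tree,
    cluster containment is a trivial membership test; for hybridization
    networks the analogous problem is NP-complete."""
    out = set()

    def leaves(node):
        if isinstance(node, tuple):
            s = frozenset().union(*(leaves(child) for child in node))
        else:
            s = frozenset([node])       # leaf label
        out.add(s)
        return s

    leaves(tree)
    return out

t = (('a', 'b'), ('c', 'd'))
print(frozenset({'a', 'b'}) in clusters(t))   # True
print(frozenset({'b', 'c'}) in clusters(t))   # False
```

A hybridization network must display, in this sense, the clusters of all input gene trees simultaneously, which is what drives the combinatorial search that tools like Hybroscale and PIRNv2.0 perform.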

  16. Development of a Data acquisition node for the environmental radiological monitoring network

    International Nuclear Information System (INIS)

    Benitez S, R.

    2000-01-01

    This work consists of the development of a data acquisition system for the radiation levels of each of the laboratories and installations of the ININ, interconnecting the existing radiological monitoring system (RMS-II) with a personal computer through the use and programming of a data acquisition card (PC-LPM-16). The system works in real time, presenting on the PC screen the measurement from each of the connected detectors, the alarm signals generated when permitted levels have been exceeded, and a registry of these alarm events for later processing into graphs of exposure rate against time. Aspects such as a brief history of ionizing radiation and its nature, its interaction with matter, the way it was discovered, and how it can be detected and measured are also treated, as are radiological protection and the standards on which it is based. The RMS-II radiation monitoring system, its calibration, and the obtaining of its performance curves are covered as well. Finally, the operation of the data acquisition circuit, its programming based on virtual instrumentation, and its importance are described. (Author)
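The alarm logic described above amounts to a threshold scan over the connected detectors on every acquisition cycle. The detector names and permitted levels in this sketch are invented for illustration, not the ININ configuration:

```python
def scan_alarms(readings, thresholds):
    """Return the names of detectors whose latest reading exceeds the
    permitted level. Detectors without a configured threshold never alarm."""
    return [name for name, value in readings.items()
            if value > thresholds.get(name, float('inf'))]

# Hypothetical detector readings (exposure rate, arbitrary units):
readings = {'lab-A': 0.8, 'hot-cell': 5.2, 'store': 0.1}
thresholds = {'lab-A': 2.0, 'hot-cell': 2.0, 'store': 2.0}
print(scan_alarms(readings, thresholds))  # ['hot-cell']
```

In the real system each alarm event would also be timestamped and appended to the registry from which the exposure-rate-versus-time graphs are later built.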

  17. International Symposium on Complex Computing-Networks

    CERN Document Server

    Sevgi, L; CCN2005; Complex computing networks: Brain-like and wave-oriented electrodynamic algorithms

    2006-01-01

    This book uniquely combines new advances in electromagnetic theory and circuits & systems theory, integrating both fields with regard to computational aspects of common interest. Emphasized subjects are methods that mimic brain-like and electrodynamic behaviour; among these are cellular neural networks, chaos and chaotic dynamics, attractor-based computation, and stream ciphers. The book contains carefully selected contributions from the Symposium CCN2005. Pictures from the bestowal of honorary doctorate degrees on Leon O. Chua and Leopold B. Felsen are included.

  18. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields-from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers.Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib
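One small, concrete piece of the protocol the book covers is NTP's 64-bit timestamp: 32 bits of seconds and 32 bits of binary fraction, counted from the 1900 epoch rather than the Unix 1970 epoch. A conversion sketch:

```python
NTP_UNIX_DELTA = 2_208_988_800  # seconds between 1900-01-01 and 1970-01-01

def ntp_to_unix(ntp64: int) -> float:
    """Convert a 64-bit NTP timestamp (32.32 fixed point, epoch 1900)
    to a Unix timestamp (float seconds since 1970)."""
    seconds = ntp64 >> 32
    fraction = ntp64 & 0xFFFFFFFF
    return (seconds - NTP_UNIX_DELTA) + fraction / 2**32

# Half a second past the Unix epoch, encoded as 32.32 fixed point:
ts = (NTP_UNIX_DELTA << 32) | (1 << 31)
print(ntp_to_unix(ts))  # 0.5
```

The 32-bit fraction gives a resolution of about 233 picoseconds, far finer than the network jitter NTP actually has to contend with; the era rollover of the 32-bit seconds field (in 2036) is one of the edge cases the book discusses.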

  19. The Role of Computer Networks in Aerospace Engineering.

    Science.gov (United States)

    Bishop, Ann Peterson

    1994-01-01

    Presents selected results from an empirical investigation into the use of computer networks in aerospace engineering based on data from a national mail survey. The need for user-based studies of electronic networking is discussed, and a copy of the questionnaire used in the survey is appended. (Contains 46 references.) (LRW)

  20. A Distributed Computing Network for Real-Time Systems.

    Science.gov (United States)

    1980-11-03

    Naval Underwater Systems Center, Newport, RI, report TD 5932: "A Distributed Computing Network for Real-Time Systems" (Gordon E. Morson). The scanned abstract is otherwise illegible.

  1. Effective Data Acquisition Protocol for Multi-Hop Heterogeneous Wireless Sensor Networks Using Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Ahmed M. Khedr

    2015-10-01

    Full Text Available In designing wireless sensor networks (WSNs), it is important to reduce energy dissipation and prolong network lifetime. Clustering of nodes is one of the most effective approaches for conserving energy in WSNs. Cluster formation protocols generally consider the heterogeneity of sensor nodes in terms of energy differences between nodes but ignore their different transmission ranges. In this paper, we propose an effective data acquisition clustered protocol using compressive sensing (EDACP-CS) for heterogeneous WSNs that aims to conserve the energy of sensor nodes in the presence of energy and transmission range heterogeneity. In EDACP-CS, cluster heads are selected based on the distance from the base station and sensor residual energy. Simulation results show that our protocol offers a much better performance than the existing protocols in terms of energy consumption, stability, network lifetime, and throughput.
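A cluster-head selection of the kind described, trading off residual energy against distance to the base station, can be sketched as a weighted ranking. The scoring weights and node records below are our illustration, not the EDACP-CS formula:

```python
import math

def select_cluster_heads(nodes, base_station, n_heads, w_energy=0.7):
    """Rank nodes by a weighted score of normalized residual energy (higher
    is better) and normalized distance to the base station (closer is
    better), and return the top n_heads as cluster heads."""
    def dist(n):
        return math.hypot(n['x'] - base_station[0], n['y'] - base_station[1])

    d_max = max(dist(n) for n in nodes) or 1.0
    e_max = max(n['energy'] for n in nodes) or 1.0

    def score(n):
        return (w_energy * n['energy'] / e_max
                + (1 - w_energy) * (1 - dist(n) / d_max))

    return sorted(nodes, key=score, reverse=True)[:n_heads]

nodes = [{'id': 1, 'x': 10, 'y': 10, 'energy': 2.0},
         {'id': 2, 'x': 90, 'y': 90, 'energy': 0.5},
         {'id': 3, 'x': 20, 'y': 15, 'energy': 1.8}]
heads = select_cluster_heads(nodes, base_station=(0, 0), n_heads=1)
print(heads[0]['id'])  # 1: highest energy and closest to the base station
```

Re-running the selection each round lets energy depletion rotate the cluster-head role, which is what prolongs network lifetime in clustered protocols of this family.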

  2. Synchronized Pair Configuration in Virtualization-Based Lab for Learning Computer Networks

    Science.gov (United States)

    Kongcharoen, Chaknarin; Hwang, Wu-Yuin; Ghinea, Gheorghita

    2017-01-01

    More studies are concentrating on using virtualization-based labs to facilitate computer or network learning concepts. Some benefits are lower hardware costs and greater flexibility in reconfiguring computer and network environments. However, few studies have investigated effective mechanisms for using virtualization fully for collaboration.…

  3. High-speed packet switching network to link computers

    CERN Document Server

    Gerard, F M

    1980-01-01

    Virtually all of the experiments conducted at CERN use minicomputers today; some simply acquire data and store results on magnetic tape while others actually control experiments and help to process the resulting data. Currently there are more than two hundred minicomputers being used in the laboratory. In order to provide the minicomputer users with access to facilities available on mainframes and also to provide intercommunication between various experimental minicomputers, CERN opted for a packet switching network back in 1975. It was decided to use Modcomp II computers as switching nodes. The only software to be taken was a communications-oriented operating system called Maxcom. Today eight Modcomp II 16-bit computers plus six newer Classic minicomputers from Modular Computer Services have been purchased for the CERNET data communications networks. The current configuration comprises 11 nodes connecting more than 40 user machines to one another and to the laboratory's central computing facility. (0 refs).

  4. A method for improved 4D-computed tomography data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Kupper, Martin; Sprengel, Wolfgang [Technische Univ. Graz (Austria). Inst. fuer Materialphysik; Winkler, Peter; Zurl, Brigitte [Medizinische Univ. Graz (Austria). Comprehensive Cancer Center

    2017-05-01

    In four-dimensional time-dependent computed tomography (4D-CT) of the lungs, irregularities in breathing movements can cause errors in data acquisition, or even data loss. We present a method based on sending a synthetic, regular breathing signal to the CT instead of the real signal, which ensures 4D-CT data sets without data loss. Subsequent correction of the signal based on the real breathing curve enables an accurate reconstruction of the size and movement of the target volume. This makes it possible to plan radiation treatment based on the obtained data. The method was tested with dynamic thorax phantom measurements using synthetic and real breathing patterns.
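The surrogate signal sent to the scanner can be as simple as a raised cosine with the patient's nominal breathing period; the subsequent correction then maps the acquired phases back onto the real, irregular curve. Period, amplitude, and sampling rate below are illustrative values, not those of the paper:

```python
import math

def synthetic_breathing(period_s, amplitude, duration_s, rate_hz=25):
    """Regular raised-cosine surrogate breathing curve, standing in for the
    patient's irregular respiratory signal during 4D-CT acquisition.
    Returns sampled displacement values starting at end-exhale (zero)."""
    n = int(duration_s * rate_hz)
    return [amplitude * 0.5 * (1 - math.cos(2 * math.pi * (i / rate_hz) / period_s))
            for i in range(n)]

sig = synthetic_breathing(period_s=4.0, amplitude=10.0, duration_s=8.0)
print(round(min(sig), 3), round(max(sig), 3))  # 0.0 10.0
```

Because the surrogate is perfectly periodic, the CT sorts projections into respiratory bins without gaps; the real curve is only needed afterwards, to re-assign each bin its true amplitude.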

  5. "Big Dee" upgrade of the Doublet III diagnostic data acquisition computer system

    International Nuclear Information System (INIS)

    Mcharg, B.B.

    1983-01-01

    The "Big Dee" upgrade of the Doublet III tokamak facility will begin operation in 1986 with an initial quantity of data expected to be 10 megabytes per shot, eventually attaining 20-25 megabytes per shot, in comparison to the 4-5 megabytes of data currently acquired. Handling this greater quantity of data and serving physics needs for significantly improved between-shot data processing will require a substantial upgrade of the existing data acquisition system. The key points of the philosophy adopted for the upgraded system are to (1) preserve existing hardware; (2) preserve existing software; (3) configure the system in a modular fashion; and (4) distribute the data acquisition over multiple computers. The existing system, using ModComp CLASSIC 16-bit minicomputers, is capable of handling 5 megabytes of data per shot.

  6. Big Dee upgrade of the Doublet III diagnostic data acquisition computer system

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1983-12-01

    The Big Dee upgrade of the Doublet III tokamak facility will begin operation in 1986 with an initial quantity of data expected to be 10 megabytes per shot, eventually attaining 20 to 25 megabytes per shot, in comparison to the 4 to 5 megabytes of data currently acquired. Handling this greater quantity of data and serving physics needs for significantly improved between-shot data processing will require a substantial upgrade of the existing data acquisition system. The key points of the philosophy adopted for the upgraded system are to (1) preserve existing hardware; (2) preserve existing software; (3) configure the system in a modular fashion; and (4) distribute the data acquisition over multiple computers. The existing system, using ModComp CLASSIC 16-bit minicomputers, is capable of handling 5 megabytes of data per shot.

  7. General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 12 (2003), s. 2727-2778 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords : computational power * computational complexity * perceptrons * radial basis functions * spiking neurons * feedforward networks * recurrent networks * probabilistic computation * analog computation Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  8. A program to compute the soft Robinson-Foulds distance between phylogenetic networks.

    Science.gov (United States)

    Lu, Bingxin; Zhang, Louxin; Leong, Hon Wai

    2017-03-14

    Over the past two decades, phylogenetic networks have been studied to model reticulate evolutionary events. The relationships among phylogenetic networks, phylogenetic trees and clusters serve as the basis for reconstruction and comparison of phylogenetic networks. To understand these relationships, two problems are raised: the tree containment problem, which asks whether a phylogenetic tree is displayed in a phylogenetic network, and the cluster containment problem, which asks whether a cluster is represented at a node in a phylogenetic network. Both problems are NP-complete. A fast exponential-time algorithm for the cluster containment problem on arbitrary networks is developed and implemented in C. The resulting program is further extended into a computer program for fast computation of the soft Robinson-Foulds distance between phylogenetic networks. Two computer programs are developed for facilitating reconstruction and validation of phylogenetic network models in evolutionary and comparative genomics. Our simulation tests indicated that they are fast enough for use in practice. Additionally, our simulation data demonstrate that the distribution of the soft Robinson-Foulds distance between phylogenetic networks is unlikely to be normal.
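For two trees, the classical Robinson-Foulds distance, which the soft variant above relaxes for networks, is simply the size of the symmetric difference of their cluster sets. A sketch over a toy nested-tuple tree encoding of our own:

```python
def rf_distance(t1, t2):
    """Classical Robinson-Foulds distance between two rooted trees over the
    same leaf set, given as nested tuples: the number of clusters present
    in exactly one of the two trees."""
    def clusters(tree):
        out = set()

        def rec(node):
            s = (frozenset().union(*map(rec, node))
                 if isinstance(node, tuple) else frozenset([node]))
            out.add(s)
            return s

        rec(tree)
        return out

    return len(clusters(t1) ^ clusters(t2))

print(rf_distance((('a', 'b'), ('c', 'd')), (('a', 'b'), ('c', 'd'))))  # 0
print(rf_distance((('a', 'b'), ('c', 'd')), (('a', 'c'), ('b', 'd'))))  # 4
```

In a network, a reticulation node represents several alternative clusters at once, which is why the soft RF distance requires solving cluster containment rather than a plain set comparison.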

  9. Advanced data acquisition system for SEVAN

    Science.gov (United States)

    Chilingaryan, Suren; Chilingarian, Ashot; Danielyan, Varuzhan; Eppler, Wolfgang

    2009-02-01

    Huge magnetic clouds of plasma emitted by the Sun dominate intense geomagnetic storm occurrences, and they are simultaneously correlated with variations in the spectra of particles and nuclei in interplanetary space, ranging from subthermal solar wind ions up to GeV-energy galactic cosmic rays. For reliable and fast forecasting of space weather, worldwide networks of particle detectors are operated at different latitudes, longitudes, and altitudes. Based on a new type of hybrid particle detector developed in the context of the International Heliophysical Year (IHY 2007) at the Aragats Space Environmental Center (ASEC), we started to prepare hardware and software for the first sites of the Space Environmental Viewing and Analysis Network (SEVAN). In this paper the architecture of the newly developed data acquisition system for SEVAN is presented. We plan to run the SEVAN network under a single data acquisition system, enabling fast integration of data for on-line analysis of solar flare events. The Advanced Data Acquisition System (ADAS) is designed as a distributed network of uniform components connected by Web Services. Its main component is the Unified Readout and Control Server (URCS), which controls the underlying electronics by means of detector-specific drivers and performs a preliminary analysis of the on-line data. The lower-level components of the URCS are implemented in C, and a fast binary representation is used for data exchange with the electronics. However, after preprocessing, the data are converted to a self-describing hybrid XML/binary format. To achieve better reliability, all URCS run on embedded computers without disks and fans, avoiding the limited lifetime of moving mechanical parts. The data storage is carried out by high-performance servers working in parallel to provide data security. These servers periodically collect the data from all URCS and store it in a MySQL database. The implementation of the control interface is based on high level

  10. Computer-based programs on acquisition of reading skills in schoolchildren (review of contemporary foreign investigations)

    Directory of Open Access Journals (Sweden)

    Prikhoda N.A.

    2015-03-01

    The article presents a description of 17 computer-based programs that were used over the last 5 years (2008-2013) in 15 studies of computer-assisted reading instruction and intervention for schoolchildren. The article describes the specific terminology used in the above-mentioned studies and the contents of the training sessions. It also briefly analyzes the main characteristics of the computer-based techniques: language of instruction, age and basic characteristics of the students, duration and frequency of the training sessions, and the dependent variables of instruction. Special attention is paid to the efficiency of acquisition of different reading skills through computer-based programs in comparison to traditional school instruction.

  11. Transputer networks for the on-line analysis of fine-grained electromagnetic calorimeter data

    International Nuclear Information System (INIS)

    Girotto, G.L.; Lanceri, L.; Scuri, F.; Zoppolato, E.

    1994-01-01

    Transputer networks, designed to perform parallel computations, are well suited for data acquisition, on-line analysis and second level trigger tasks in high energy physics experiments. Some simple algorithms for the analysis of fine-grained electromagnetic calorimeter data were implemented on two types of transputer networks and tested on real and simulated data from a silicon-tungsten calorimeter. Results are presented on the processing speed, measured in a test setup, and extrapolations to a full size detector and data acquisition system are discussed. ((orig.))

  12. Tensor network method for reversible classical computation

    Science.gov (United States)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
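
    The counting-by-contraction idea can be illustrated with a toy example (an illustration of the general principle, not the authors' ICD algorithm): encode each constraint's truth table as a 0/1 tensor, and the full contraction of the resulting tensor network yields the number of satisfying assignments.

```python
# Toy version of counting solutions by tensor contraction: each logical
# constraint's truth table becomes a 0/1 tensor; contracting the network
# sums over all assignments consistent with every constraint.
import numpy as np

# Parity constraint a XOR b XOR c = 0, as a rank-3 tensor over {0, 1}.
P = np.zeros((2, 2, 2))
for a in range(2):
    for b in range(2):
        for c in range(2):
            if a ^ b ^ c == 0:
                P[a, b, c] = 1.0

# One constraint: 4 of the 8 assignments satisfy it.
print(int(np.einsum('abc->', P)))        # 4

# Two constraints sharing variable c: a^b^c = 0 and c^d^e = 0.
# Contracting the shared index counts only consistent joint assignments.
print(int(np.einsum('abc,cde->', P, P))) # 8
```
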

  13. PROFEAT Update: A Protein Features Web Server with Added Facility to Compute Network Descriptors for Studying Omics-Derived Networks.

    Science.gov (United States)

    Zhang, P; Tao, L; Zeng, X; Qin, C; Chen, S Y; Zhu, F; Yang, S Y; Li, Z R; Chen, W P; Chen, Y Z

    2017-02-03

    The studies of biological, disease, and pharmacological networks are facilitated by the systems-level investigations using computational tools. In particular, the network descriptors developed in other disciplines have found increasing applications in the study of the protein, gene regulatory, metabolic, disease, and drug-targeted networks. Facilities are provided by the public web servers for computing network descriptors, but many descriptors are not covered, including those used or useful for biological studies. We upgraded the PROFEAT web server http://bidd2.nus.edu.sg/cgi-bin/profeat2016/main.cgi for computing up to 329 network descriptors and protein-protein interaction descriptors. PROFEAT network descriptors comprehensively describe the topological and connectivity characteristics of unweighted (uniform binding constants and molecular levels), edge-weighted (varying binding constants), node-weighted (varying molecular levels), edge-node-weighted (varying binding constants and molecular levels), and directed (oriented processes) networks. The usefulness of the network descriptors is illustrated by the literature-reported studies of the biological networks derived from the genome, interactome, transcriptome, metabolome, and diseasome profiles. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Using distributed processing on a local area network to increase available computing power

    International Nuclear Information System (INIS)

    Capps, K.S.; Sherry, K.J.

    1996-01-01

    The migration from central computers to desktop computers distributed the total computing horsepower of a system over many different machines. A typical engineering office may have several networked desktop computers that are sometimes idle, especially after work hours and when people are absent. Users would benefit if applications were able to use these networked computers collectively. This paper describes a method of distributing the workload of an application on one desktop system to otherwise idle systems on the network. The authors present this discussion from a developer's viewpoint, because the developer must modify an application before the user can realize any benefit of distributed computing on available systems
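
    The dispatch pattern described above can be sketched as follows. This is a hedged illustration in which the "remote" workers are threads on localhost standing in for idle desktops, and the port number is an arbitrary assumption; a real deployment would run a worker daemon on each networked machine.

```python
# Minimal sketch of LAN work distribution: a dispatcher splits an
# application's workload into chunks and farms them out to otherwise idle
# machines. Here the "remote" workers are threads on localhost.
from multiprocessing.connection import Listener, Client
import threading

ADDRESS = ('localhost', 6001)   # illustrative port, an assumption

def worker(address):
    """An idle machine: accept a chunk, compute, send the result back."""
    with Client(address) as conn:
        chunk = conn.recv()
        conn.send(sum(x * x for x in chunk))   # the distributed computation

def dispatch(chunks):
    results = []
    with Listener(ADDRESS) as listener:
        for chunk in chunks:
            # In practice a worker daemon on each desktop would connect;
            # here we launch one thread per chunk to play that role.
            threading.Thread(target=worker, args=(ADDRESS,)).start()
            with listener.accept() as conn:
                conn.send(chunk)
                results.append(conn.recv())
    return sum(results)

data = list(range(100))
chunks = [data[i::4] for i in range(4)]   # split across 4 "machines"
total = dispatch(chunks)
print(total)                              # sum of squares of 0..99 -> 328350
```
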

  15. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  17. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Directory of Open Access Journals (Sweden)

    Jakob Jordan

    2018-02-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  18. A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications

    OpenAIRE

    Wang, Shuo; Zhang, Xing; Zhang, Yan; Wang, Lin; Yang, Juwo; Wang, Wenbo

    2017-01-01

    With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on the backhaul links and long latency. Therefore, new architectures which bring network functions and contents to the network edge are proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at th...

  19. General Data Acquisition Platform for Wireless Sensor Network Based on CC2538

    Directory of Open Access Journals (Sweden)

    Yang Zhi-Jun

    2017-01-01

    Wireless sensor networks are a hotspot of current research and have very wide application prospects. Their front end is a sensor that senses and monitors the external world. This paper takes temperature and humidity as the research object and builds a wireless sensor network data acquisition platform by combining the Internet of Things and the WeChat public platform. The platform uses DHT11 temperature and humidity sensors and CC2538 sensor nodes to obtain the relevant data, with a server and database handling data access. The combination with the WeChat public platform not only allows us to view the temperature and humidity in the WeChat public account, but also allows us to understand environmental changes in the monitored area more conveniently and quickly. The effectiveness of the platform is demonstrated by the collection of temperature and humidity data.

  20. A Multi-Vehicles, Wireless Testbed for Networked Control, Communications and Computing

    Science.gov (United States)

    Murray, Richard; Doyle, John; Effros, Michelle; Hickey, Jason; Low, Steven

    2002-03-01

    We have constructed a testbed consisting of 4 mobile vehicles (with 4 additional vehicles being completed), each with embedded computing and communications capability, for use in testing new approaches for command and control across dynamic networks. The system is being used, or is planned to be used, for testing a variety of communications-related technologies, including distributed command and control algorithms, dynamically reconfigurable network topologies, source coding for real-time transmission of data in lossy environments, and multi-network communications. A unique feature of the testbed is the use of vehicles that have second-order dynamics, requiring real-time feedback algorithms to stabilize the system while performing cooperative tasks. The testbed was constructed in the Caltech Vehicles Laboratory and consists of individual vehicles with PC-based computation and controls, and multiple communications devices (802.11 wireless Ethernet, Bluetooth, and infrared). The vehicles are freely moving, wheeled platforms propelled by high-performance ducted fans. The room contains access points for an 802.11 network, overhead visual sensing (to allow emulation of GPS signal processing), a centralized computer for emulating certain distributed computations, and network gateways to control and manipulate communications traffic.

  1. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  2. The DIVA model: A neural theory of speech acquisition and production.

    Science.gov (United States)

    Tourville, Jason A; Guenther, Frank H

    2011-01-01

    The DIVA model of speech production provides a computationally and neuroanatomically explicit account of the network of brain regions involved in speech acquisition and production. An overview of the model is provided along with descriptions of the computations performed in the different brain regions represented in the model. The latest version of the model, which contains a new right-lateralized feedback control map in ventral premotor cortex, will be described, and experimental results that motivated this new model component will be discussed. Application of the model to the study and treatment of communication disorders will also be briefly described.

  3. Optimization of Hierarchical System for Data Acquisition

    Directory of Open Access Journals (Sweden)

    V. Novotny

    2011-04-01

    Television broadcasting over IP networks (IPTV) is one of a number of network applications that, in addition to media distribution, are also interested in data acquisition from a group of information resources of variable size. IPTV uses the Real-time Transport Protocol (RTP) for media streaming and the RTP Control Protocol (RTCP) for session quality feedback. Other applications, for example sensor networks, have data acquisition as their main task. Current solutions mostly have a problem with scalability: how to collect and process information from a large number of end nodes quickly and effectively? The article deals with optimization of a hierarchical system for data acquisition. The problem is described mathematically, delay minima are sought, and the results are verified by simulations.
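
    As a hedged illustration of this kind of optimization (the cost model below is our assumption, not the authors'), consider an aggregation tree with fan-out k over N sources, where a parent polls its k children sequentially at each of the roughly log_k(N) levels; the fan-out minimizing total collection delay can then be found numerically.

```python
# Toy model of hierarchical data collection: delay grows with the number of
# tree levels (depth) and with the per-level cost of polling k children
# sequentially, so total delay is roughly k * ceil(log_k(N)).
import math

def collection_delay(n_sources, fanout, per_child_cost=1.0):
    levels = math.ceil(math.log(n_sources, fanout))
    return levels * fanout * per_child_cost

N = 10_000
best = min(range(2, 50), key=lambda k: collection_delay(N, k))
print(best, collection_delay(N, best))
```

    Under this simple model a small fan-out (here 3) wins: a flat star over 10,000 nodes would cost 10,000 poll slots, while a shallow tree trades a few extra hops for massively parallel collection at the leaves.
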

  4. Performance of the TRISTAN computer control network

    International Nuclear Information System (INIS)

    Koiso, H.; Abe, K.; Akiyama, A.; Katoh, T.; Kikutani, E.; Kurihara, N.; Kurokawa, S.; Oide, K.; Shinomoto, M.

    1985-01-01

    An N-to-N token ring network of twenty-four minicomputers controls the TRISTAN accelerator complex. The computers are linked by optical fiber cables with 10 Mbps transmission speed. The software system is based on NODAL, a multi-computer interpreter language developed at the CERN SPS. Typical messages exchanged between computers are NODAL programs and NODAL variables transmitted by the EXEC and REMIT commands. These messages are exchanged as clusters of packets whose maximum size is 512 bytes. At present, eleven minicomputers are connected to the network and the total length of the ring is 1.5 km. In this condition, the maximum attainable throughput is 980 kbytes/s. The response time of a pair of EXEC and REMIT transactions, which transmit a NODAL array A together with the one-line program 'REMIT A' and immediately remit A back, is measured to be 95 + 0.039x ms, where x is the array size in bytes. In ordinary accelerator operations, the maximum channel utilization is 2%, the average packet length is 96 bytes, and the transmission rate is 10 kbytes/s.
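
    The quoted response-time model can be applied directly. A short worked example, evaluating t(x) = 95 + 0.039x ms for a few array sizes:

```python
# Worked example with the response-time model quoted above for the TRISTAN
# control network: t(x) = 95 + 0.039*x milliseconds for an x-byte array.
# The 95 ms constant dominates small transfers; a full 512-byte packet adds
# only about 20 ms on top of it.

def response_ms(array_bytes):
    return 95.0 + 0.039 * array_bytes

for size in (0, 512, 4096):
    print(f"{size:5d} B -> {response_ms(size):7.1f} ms")
```
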

  5. Computer control and data acquisition system for the R.F. Test Facility

    International Nuclear Information System (INIS)

    Stewart, K.A.; Burris, R.D.; Mankin, J.B.; Thompson, D.H.

    1986-01-01

    The Radio Frequency Test Facility (RFTF) at Oak Ridge National Laboratory, used to test and evaluate high-power ion cyclotron resonance heating (ICRH) systems and components, is monitored and controlled by a multicomponent computer system. This data acquisition and control system consists of three major hardware elements: (1) an Allen-Bradley PLC-3 programmable controller; (2) a VAX 11/780 computer; and (3) a CAMAC serial highway interface. Operating in LOCAL as well as REMOTE mode, the programmable logic controller (PLC) performs all the control functions of the test facility. The VAX computer acts as the operator's interface to the test facility by providing color mimic panel displays and allowing input via a trackball device. The VAX also provides archiving of trend data acquired by the PLC. Communications between the PLC and the VAX are via the CAMAC serial highway. Details of the hardware, software, and the operation of the system are presented in this paper

  6. Integrating Network Awareness in ATLAS Distributed Computing Using the ANSE Project

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Petrosyan, Artem; Batista, Jorge Horacio; Mc Kee, Shawn Patrick

    2015-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available such as software defined networking hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high level applications such as experiment workload management and data management systems. End to end monitoring of networks using perfSONAR combined with data flow performance metrics further allows applications to adapt based on real time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management, building upon ...

  7. Simulation and Noise Analysis of Multimedia Transmission in Optical CDMA Computer Networks

    Directory of Open Access Journals (Sweden)

    Nasaruddin Nasaruddin

    2013-09-01

    This paper simulates and analyzes noise in multimedia transmission over a flexible optical code division multiple access (OCDMA) computer network with different quality of service (QoS) requirements. To achieve multimedia transmission in OCDMA, we have proposed strict variable-weight optical orthogonal codes (VW-OOCs), which guarantee the smallest correlation value of one by optimal design. In developing multimedia transmission for a computer network, a simulation tool is essential for analyzing the effectiveness of the various transmitted services. In this paper, implementation models are proposed to analyze multimedia transmission in representative OCDMA computer networks using MATLAB Simulink tools. Simulation results of the models are discussed, including output spectrums of transmitted signals, superimposed signals, received signals, and eye diagrams with and without noise. Using the proposed models, a multimedia OCDMA computer network using the strict VW-OOCs is practically evaluated. Furthermore, system performance is also evaluated by considering avalanche photodiode (APD) noise and thermal noise. The results show that the system performance depends on code weight, received laser power, APD noise, and thermal noise, which should be considered as important parameters in the design and implementation of multimedia transmission in OCDMA computer networks.
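
    The correlation property that optical orthogonal codes must satisfy can be checked with a small sketch (the codewords below are hypothetical examples, not the paper's VW-OOCs): cyclic auto- and cross-correlations of the 0/1 codewords must not exceed one, so that users sharing the fiber can be separated.

```python
# Verify the correlation bound for two hypothetical weight-3 optical
# orthogonal codewords of length 13: the cyclic cross-correlation between
# them stays at or below 1 for every shift.

def cyclic_correlation(a, b, shift):
    n = len(a)
    return sum(a[i] & b[(i + shift) % n] for i in range(n))

def max_cross_correlation(a, b):
    return max(cyclic_correlation(a, b, s) for s in range(len(a)))

# Chip positions (0, 1, 4) and (0, 2, 7) have all-distinct pairwise
# differences mod 13, which is what keeps the correlations low.
c1 = [1 if i in (0, 1, 4) else 0 for i in range(13)]
c2 = [1 if i in (0, 2, 7) else 0 for i in range(13)]
print(max_cross_correlation(c1, c2))  # 1
```
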

  9. Network architecture test-beds as platforms for ubiquitous computing.

    Science.gov (United States)

    Roscoe, Timothy

    2008-10-28

    Distributed systems research, and in particular ubiquitous computing, has traditionally assumed the Internet as a basic underlying communications substrate. Recently, however, the networking research community has come to question the fundamental design or 'architecture' of the Internet. This has been led by two observations: first, that the Internet as it stands is now almost impossible to evolve to support new functionality; and second, that modern applications of all kinds now use the Internet rather differently, and frequently implement their own 'overlay' networks above it to work around its perceived deficiencies. In this paper, I discuss recent academic projects to allow disruptive change to the Internet architecture, and also outline a radically different view of networking for ubiquitous computing that such proposals might facilitate.

  10. A Multilevel Modeling Approach to Examining Individual Differences in Skill Acquisition for a Computer-Based Task

    OpenAIRE

    Nair, Sankaran N.; Czaja, Sara J.; Sharit, Joseph

    2007-01-01

    This article explores the role of age, cognitive abilities, prior experience, and knowledge in skill acquisition for a computer-based simulated customer service task. Fifty-two participants aged 50–80 performed the task over 4 consecutive days following training. They also completed a battery that assessed prior computer experience and cognitive abilities. The data indicated that overall quality and efficiency of performance improved with practice. The predictors of initial level of performan...

  11. High performance data acquisition with InfiniBand

    International Nuclear Information System (INIS)

    Adamczewski, Joern; Essel, Hans G.; Kurz, Nikolaus; Linev, Sergey

    2008-01-01

    For the new experiments at FAIR, new concepts of data acquisition systems have to be developed, such as the distribution of self-triggered, time-stamped data streams over high-performance networks for event building. In this concept any data filtering is done behind the network. Therefore the network must achieve up to 1 GByte/s bi-directional data transfer per node. Detailed simulations have been done to optimize scheduling mechanisms for such event building networks. For real performance tests, InfiniBand has been chosen as one of the fastest network technologies available. The measurements of network event building have been performed on different Linux clusters, from four to over one hundred nodes. Several InfiniBand libraries have been tested, such as uDAPL, Verbs, and MPI. The tests have been integrated into the data acquisition backbone core software DABC, a general-purpose data acquisition library. Detailed results are presented. In the worst cases (over one hundred nodes), 50% of the required bandwidth can already be achieved. It seems possible to improve these results by further investigation.

  12. Image acquisition, transmission and assignment in 60Co container inspection system

    International Nuclear Information System (INIS)

    Wu Zhifang; Zhou Liye; Liu Ximing; Wang Liqiang

    1999-01-01

    The author describes the data acquisition mode and image reconstruction method in the 60 Co container inspection system, analyzes the relationship between line pick period and geometric distortion, and clarifies the requirements on the data transmission rate. Several data communication methods are discussed and a network plan is drawn up, realizing automatic routing and reasonable assignment of data in the system, cooperation of multiple computers, and parallel processing, thus greatly improving the system's inspection efficiency.

  13. Computer software design description for the integrated control and data acquisition system LDUA system

    International Nuclear Information System (INIS)

    Aftanas, B.L.

    1998-01-01

    This Computer Software Design Description (CSDD) document provides the overview of the software design for all the software that is part of the integrated control and data acquisition system of the Light Duty Utility Arm System (LDUA). It describes the major software components and how they interface. It also references the documents that contain the detailed design description of the components

  14. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    Science.gov (United States)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. The purpose is to establish and discuss Laboratory objectives for computing and networking in support of science. The purpose is also to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  15. Novel Ethernet Based Optical Local Area Networks for Computer Interconnection

    NARCIS (Netherlands)

    Radovanovic, Igor; van Etten, Wim; Taniman, R.O.; Kleinkiskamp, Ronny

    2003-01-01

    In this paper we present new optical local area networks for fiber-to-the-desk application. Presented networks are expected to bring a solution for having optical fibers all the way to computers. To bring the overall implementation costs down we have based our networks on short-wavelength optical

  16. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  17. Operation of a Data Acquisition, Transfer, and Storage System for the Global Space-Weather Observation Network

    Directory of Open Access Journals (Sweden)

    T Nagatsuma

    2014-10-01

    A system to optimize the management of global space-weather observation networks has been developed by the National Institute of Information and Communications Technology (NICT). Named the WONM (Wide-area Observation Network Monitoring) system, it enables data acquisition, transfer, and storage through connection to the NICT Science Cloud, and has been supplied to observatories to support space-weather forecasting and research. This system provides easier management of data collection than our previously employed systems by means of autonomous system recovery, periodic state monitoring, and dynamic warning procedures. The operation of the WONM system is introduced in this report.

  18. Machine learning based Intelligent cognitive network using fog computing

    Science.gov (United States)

    Lu, Jingyang; Li, Lun; Chen, Genshe; Shen, Dan; Pham, Khanh; Blasch, Erik

    2017-05-01

    In this paper, a Cognitive Radio Network (CRN) based on artificial intelligence is proposed to distribute the limited radio spectrum resources more efficiently. The CRN framework can analyze time-sensitive signal data close to the signal source using fog computing with different types of machine learning techniques. Depending on the computational capabilities of the fog nodes, different features and machine learning techniques are chosen to optimize spectrum allocation. In addition, the computing nodes send a periodic signal summary, which is much smaller than the original signal, to the cloud so that the system-wide spectrum allocation strategies can be dynamically updated. By applying fog computing, the system is more adaptive to the local environment and more robust to spectrum changes. Because most of the signal data is processed at the fog level, system security is also strengthened, since the communication burden on the network is reduced.
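The periodic signal summary mentioned above can be sketched as follows; the particular statistics chosen here are an assumption for illustration, not the paper's actual feature set:

```python
# Toy sketch: a fog node compresses a window of raw samples into a few
# statistics before forwarding them to the cloud (fields are illustrative).
def summarize(window):
    n = len(window)
    mean = sum(window) / n
    power = sum(x * x for x in window) / n  # mean signal power
    return {"n": n, "mean": mean, "power": power, "peak": max(window)}

raw_window = [0.0, 1.0, -1.0, 2.0]
summary = summarize(raw_window)  # four statistics instead of n raw samples
```

Whatever the real feature set, the design point is the same: the cloud receives a fixed-size summary per period instead of the raw sample stream.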

  19. AN EVALUATION AND IMPLEMENTATION OF COLLABORATIVE AND SOCIAL NETWORKING TECHNOLOGIES FOR COMPUTER EDUCATION

    Directory of Open Access Journals (Sweden)

    Ronnie Cheung

    2011-06-01

    Full Text Available We have developed a collaborative and social networking environment that integrates the knowledge and skills in communication and computing studies with a multimedia development project. The outcomes of the students’ projects show that computer literacy can be enhanced through a cluster of communication, social, and digital skills. Experience in implementing a web-based social networking environment shows that the new media is an effective means of enriching knowledge by sharing in computer literacy projects. The completed assignments, projects, and self-reflection reports demonstrate that the students were able to achieve the learning outcomes of a computer literacy course in multimedia development. The students were able to assess the effectiveness of a variety of media through the development of media presentations in a web-based, social-networking environment. In the collaborative and social-networking environment, students were able to collaborate and communicate with their team members to solve problems, resolve conflicts, make decisions, and work as a team to complete tasks. Our experience has shown that social networking environments are effective for computer literacy education, and the development of the new media is emerging as the core knowledge for computer literacy education.

  20. 1984 CERN school of computing

    International Nuclear Information System (INIS)

    1985-01-01

    The eighth CERN School of Computing covered subjects mainly related to computing for elementary-particle physics. These proceedings contain written versions of most of the lectures delivered at the School. Notes on the following topics are included: trigger and data-acquisition plans for the LEP experiments; unfolding methods in high-energy physics experiments; Monte Carlo techniques; relational data bases; data networks and open systems; the Newcastle connection; portable operating systems; expert systems; microprocessors - from basic chips to complete systems; algorithms for parallel computers; trends in supercomputers and computational physics; supercomputing and related national projects in Japan; application of VLSI in high-energy physics, and single-user systems. See hints under the relevant topics. (orig./HSI)

  1. Student Motivation in Computer Networking Courses

    OpenAIRE

    Wen-Jung Hsin, PhD

    2007-01-01

    This paper introduces several hands-on projects that have been used to motivate students in learning various computer networking concepts. These projects are shown to be very useful and applicable to the learners’ daily tasks and activities such as emailing, Web browsing, and online shopping and banking, and lead to an unexpected byproduct, self-motivation.

  2. Large-scale brain networks underlying language acquisition in early infancy

    Directory of Open Access Journals (Sweden)

    Fumitaka eHomae

    2011-05-01

    Full Text Available A critical issue in human development is that of whether the language-related areas in the left frontal and temporal regions work as a functional network in preverbal infants. Here, we used 94-channel near-infrared spectroscopy (NIRS) to reveal the functional networks in the brains of sleeping 3-month-old infants with and without presenting speech sounds. During the first 3 min, we measured spontaneous brain activation (period 1). After period 1, we provided stimuli by playing Japanese sentences for 3 min (period 2). Finally, we measured brain activation for 3 min without providing the stimulus (period 3), as in period 1. We found that not only the bilateral temporal and temporoparietal regions but also the prefrontal and occipital regions showed oxygenated hemoglobin (oxy-Hb) signal increases and deoxygenated hemoglobin (deoxy-Hb) signal decreases when speech sounds were presented to infants. By calculating time-lagged cross-correlations and coherences of oxy-Hb signals between channels, we tested the functional connectivity for the 3 periods. The oxy-Hb signals in neighboring channels, as well as their homologous channels in the contralateral hemisphere, showed high correlation coefficients in period 1. Similar correlations were observed in period 2; however, the number of channels showing high correlations was higher in the ipsilateral hemisphere, especially in the anterior-posterior direction. The functional connectivity in period 3 showed a close relationship between the frontal and temporal regions, which was less prominent in period 1, indicating that these regions form the functional networks and work as a hysteresis system that has memory of the previous inputs. We propose a hypothesis that the spatiotemporally large-scale brain networks, including the frontal and temporal regions, underlie speech processing in infants and they might play important roles in language acquisition during infancy.
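As a rough illustration of the time-lagged cross-correlation analysis used here (not the authors' pipeline; pure Python on toy signals, with no NIRS preprocessing):

```python
# Minimal sketch: Pearson correlation between two channel signals at a given
# time lag, then a scan over lags to find the strongest coupling.
def xcorr_at_lag(x, y, lag):
    """Correlation between x[t] and y[t + lag]."""
    if lag >= 0:
        xs, ys = x[:len(x) - lag], y[lag:]
    else:
        xs, ys = x[-lag:], y[:len(y) + lag]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

x = [0, 1, 0, -1, 0, 1, 0, -1]   # toy oscillatory "channel" signal
y = x[1:] + [0]                  # a second channel leading x by one sample
best = max(range(-2, 3), key=lambda l: xcorr_at_lag(x, y, l))
```

Scanning the lag axis like this is what distinguishes time-lagged cross-correlation from the zero-lag correlation coefficient.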

  3. Quantum computation over the butterfly network

    International Nuclear Information System (INIS)

    Soeda, Akihito; Kinjo, Yoshiyuki; Turner, Peter S.; Murao, Mio

    2011-01-01

    In order to investigate distributed quantum computation under restricted network resources, we introduce a quantum computation task over the butterfly network where both quantum and classical communications are limited. We consider deterministically performing a two-qubit global unitary operation on two unknown inputs given at different nodes, with outputs at two distinct nodes. By using a particular resource setting introduced by M. Hayashi [Phys. Rev. A 76, 040301(R) (2007)], which is capable of performing a swap operation by adding two maximally entangled qubits (ebits) between the two input nodes, we show that unitary operations can be performed without adding any entanglement resource, if and only if the unitary operations are locally unitary equivalent to controlled unitary operations. Our protocol is optimal in the sense that the unitary operations cannot be implemented if we relax the specifications of any of the channels. We also construct protocols for performing controlled traceless unitary operations with a 1-ebit resource and for performing global Clifford operations with a 2-ebit resource.

  4. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    OpenAIRE

    Shuo Gu; Jianfeng Pei

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational study of Chinese herbal medicine, with in-depth understanding towards pharmacognosy. This paper summarizes these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regula...

  5. Computer Networking Strategies for Building Collaboration among Science Educators.

    Science.gov (United States)

    Aust, Ronald

    The development and dissemination of science materials can be associated with technical delivery systems such as the Unified Network for Informatics in Teacher Education (UNITE). The UNITE project was designed to investigate ways for using computer networking to improve communications and collaboration among university schools of education and…

  6. Computer control and data acquisition system for the Mirror Fusion Test Facility Ion Cyclotron Resonant Heating System (ICRH)

    International Nuclear Information System (INIS)

    Cheshire, D.L.; Thomas, R.A.

    1985-01-01

    The Lawrence Livermore National Laboratory (LLNL) large Mirror Fusion Test Facility (MFTF-B) will employ an Ion Cyclotron Resonant Heating (ICRH) system for plasma startup. As the MFTF-B Industrial Participant, TRW has responsibility for the ICRH system, including development of the data acquisition and control system. During MFTF-B operation, the system will be run from the MFTF-B Supervisory Control and Diagnostic System (SCDS). For subsystem development and checkout at TRW, and for verification and acceptance testing at LLNL, the system will be run from a stand-alone computer system designed to simulate the functions of SCDS. The ''SCDS Simulator'' was developed originally for the MFTF-B ECRH System; descriptions of the hardware and software are updated in this paper. The computer control and data acquisition functions implemented for ICRH are described, including development status and the test schedule at TRW and at LLNL. The application software is written for the SCDS Simulator, but it is programmed in PASCAL and designed to facilitate conversion for use on the SCDS computers.

  7. A smart modules network for real time data acquisition: application to biomedical research.

    Science.gov (United States)

    Logier, R; De Jonckheere, J; Dassonneville, A; Chaud, P; Jeanne, M

    2009-01-01

    Healthcare monitoring applications require the measurement and the analysis of multiple physiological data. In the field of biomedical research, these data come from different devices, which raises data centralization and synchronization difficulties. In this paper, we describe a smart hardware modules network for real-time biomedical data acquisition. This toolkit, composed of multiple electronic modules, allows users to acquire and transmit all kinds of biomedical signals and parameters. These highly efficient hardware modules have been developed and tested especially for biomedical studies and used in a large number of clinical investigations.

  8. An Efficient Algorithm for Computing Attractors of Synchronous And Asynchronous Boolean Networks

    Science.gov (United States)

    Zheng, Desheng; Yang, Guowu; Li, Xiaoyu; Wang, Zhicai; Liu, Feng; He, Lei

    2013-01-01

    Biological networks, such as genetic regulatory networks, often contain positive and negative feedback loops that settle down to dynamically stable patterns. Identifying these patterns, the so-called attractors, can provide important insights for biologists into the molecular mechanisms underlying many coordinated cellular processes such as cellular division, differentiation, and homeostasis. Both synchronous and asynchronous Boolean networks have been used to simulate genetic regulatory networks and identify their attractors. Common methods of computing attractors start with a randomly selected initial state and finish with an exhaustive search of the state space of a network. However, the time complexity of these methods grows exponentially with respect to the number and length of attractors. Here, we build two algorithms to compute attractors in synchronous and asynchronous Boolean networks. For the synchronous scenario, combining iterative methods and reduced ordered binary decision diagrams (ROBDD), we propose an improved algorithm to compute attractors. In the second algorithm, the attractors of synchronous Boolean networks are used with asynchronous Boolean transition functions to derive the attractors of the asynchronous scenario. The proposed algorithms are implemented in a procedure called geneFAtt. Compared to existing tools such as genYsis, geneFAtt is significantly faster in computing attractors for empirical experimental systems. Availability: The software package is available at https://sites.google.com/site/desheng619/download. PMID:23585840
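To make the notion of an attractor concrete, here is a brute-force sketch for a toy two-gene synchronous network; the paper's ROBDD-based geneFAtt algorithm is far more scalable than this exhaustive enumeration, which only illustrates the definition:

```python
# Brute-force attractor detection for a synchronous Boolean network:
# follow each trajectory until a state repeats, then record the cycle.
from itertools import product

# Toy 2-gene network: x' = y, y' = x (two fixed points and one 2-cycle).
def step(state):
    x, y = state
    return (y, x)

def attractors(n_genes, step_fn):
    found = set()
    for start in product((0, 1), repeat=n_genes):
        seen = {}                      # state -> position along the trajectory
        s = start
        while s not in seen:
            seen[s] = len(seen)
            s = step_fn(s)
        cycle_start = seen[s]          # first repeated state closes the cycle
        cycle = [t for t, i in sorted(seen.items(), key=lambda kv: kv[1])
                 if i >= cycle_start]
        found.add(tuple(sorted(cycle)))
    return found

atts = attractors(2, step)  # two fixed points plus the 2-cycle {(0,1),(1,0)}
```

The exponential cost is visible directly: the outer loop runs over all 2^n states, which is exactly what symbolic (ROBDD) methods avoid.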

  9. Computer network for experimental research using ISDN

    International Nuclear Information System (INIS)

    Ida, Katsumi; Nakanishi, Hideya

    1997-01-01

    This report describes the development of a computer network that uses the Integrated Services Digital Network (ISDN) for real-time analysis in experimental plasma physics and nuclear fusion research. The communication speed, 64/128 kbps (INS64) or 1.5 Mbps (INS1500) per connection, is independent of how busy the network is. When INS1500 is used, the communication speed, to which the public telephone connection fee is proportional, can be dynamically varied from 64 kbps to 1472 kbps (depending on how much data are being transferred) using the Bandwidth-on-Demand (BOD) function in the ISDN router. On-demand dial-up and time-out disconnection reduce the public telephone connection fee by 10%-97%. (author)

  10. Development of high-availability ATCA/PCIe data acquisition instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Correia, Miguel; Sousa, Jorge; Batista, Antonio J.N.; Combo, Alvaro; Santos, Bruno; Rodrigues, Antonio P.; Carvalho, Paulo F.; Goncalves, Bruno [Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Correia, Carlos M.B.A. [Centro de Instrumentacao, Dept. de Fisica, Universidade de Coimbra, 3004-516 Coimbra (Portugal)

    2015-07-01

    The latest fusion energy experiments envision a quasi-continuous operation regime. Consequently, the largest experimental devices currently in development specify high-availability (HA) requirements for the whole plant infrastructure. HA features enable the whole facility to perform seamlessly in the case of failure of any of its components, coping with the increasing duration of plasma discharges (steady-state) and assuring the safety of equipment, people, the environment, and investment. IPFN developed a control and data acquisition system aiming at fast control of advanced fusion devices, which is thus required to provide such HA features. The system is based on in-house developed Advanced Telecommunications Computing Architecture (ATCA) instrumentation modules - IO blades and data switch blades - establishing a PCIe network on the ATCA shelf's back-plane. The data switch communicates with an external host computer through a PCIe data network. At the hardware management level, the system architecture takes advantage of ATCA native redundancy and hot-swap specifications to implement fail-over substitution of IO or data switch blades. A redundant host scheme is also supported by the ATCA/PCIe platform. At the software level, PCIe provides implementation of hot-plug services, which translate the hardware changes to the corresponding software/operating system devices. The paper presents how the ATCA- and PCIe-based system can be set up to perform with the desired degree of HA, thus being suitable for advanced fusion control and data acquisition systems. (authors)

  11. Development of high-availability ATCA/PCIe data acquisition instrumentation

    International Nuclear Information System (INIS)

    Correia, Miguel; Sousa, Jorge; Batista, Antonio J.N.; Combo, Alvaro; Santos, Bruno; Rodrigues, Antonio P.; Carvalho, Paulo F.; Goncalves, Bruno; Correia, Carlos M.B.A.

    2015-01-01

    The latest fusion energy experiments envision a quasi-continuous operation regime. Consequently, the largest experimental devices currently in development specify high-availability (HA) requirements for the whole plant infrastructure. HA features enable the whole facility to perform seamlessly in the case of failure of any of its components, coping with the increasing duration of plasma discharges (steady-state) and assuring the safety of equipment, people, the environment, and investment. IPFN developed a control and data acquisition system aiming at fast control of advanced fusion devices, which is thus required to provide such HA features. The system is based on in-house developed Advanced Telecommunications Computing Architecture (ATCA) instrumentation modules - IO blades and data switch blades - establishing a PCIe network on the ATCA shelf's back-plane. The data switch communicates with an external host computer through a PCIe data network. At the hardware management level, the system architecture takes advantage of ATCA native redundancy and hot-swap specifications to implement fail-over substitution of IO or data switch blades. A redundant host scheme is also supported by the ATCA/PCIe platform. At the software level, PCIe provides implementation of hot-plug services, which translate the hardware changes to the corresponding software/operating system devices. The paper presents how the ATCA- and PCIe-based system can be set up to perform with the desired degree of HA, thus being suitable for advanced fusion control and data acquisition systems. (authors)

  12. Student Motivation in Computer Networking Courses

    Directory of Open Access Journals (Sweden)

    Wen-Jung Hsin

    2007-01-01

    Full Text Available This paper introduces several hands-on projects that have been used to motivate students in learning various computer networking concepts. These projects are shown to be very useful and applicable to the learners’ daily tasks and activities such as emailing, Web browsing, and online shopping and banking, and lead to an unexpected byproduct, self-motivation.

  13. 1st International Conference on Signal, Networks, Computing, and Systems

    CERN Document Server

    Mohapatra, Durga; Nagar, Atulya; Sahoo, Manmath

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the first International Conference on Signal, Networks, Computing, and Systems (ICSNCS 2016) held at Jawaharlal Nehru University, New Delhi, India during February 25–27, 2016. The book is organized into two volumes and primarily focuses on theory and applications in the broad areas of communication technology, computer science and information security. The book aims to bring together the latest scientific research of academic scientists, professors, research scholars and students in the areas of signal, networks, computing and systems, detailing the practical challenges encountered and the solutions adopted.

  14. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  15. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models, and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptron, self-stabilized adaptive resonance network, hierarchically structured neocognitron, high-order correlator, network with gating control, and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking, and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features of images, such as edges and profiles, as the data form for input. Other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.

  16. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    Science.gov (United States)

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
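The training-set partitioning idea can be sketched as follows, with threads standing in for the parallel virtual machine and a trivial linear model standing in for the neural network (both are simplifications of the paper's setup):

```python
# Sketch of distributed error evaluation: the training set is split into
# chunks, each worker computes a partial sum of squared errors, and the
# partial sums are combined into the global error.
from concurrent.futures import ThreadPoolExecutor

def partial_error(weights, chunk):
    """Sum of squared errors of a 1-D linear model over one data chunk."""
    w, b = weights
    return sum((w * x + b - t) ** 2 for x, t in chunk)

def distributed_error(weights, data, n_workers=2):
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda c: partial_error(weights, c), chunks)
    return sum(parts)

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # targets t = 2x + 1
err = distributed_error((2.0, 1.0), data)  # exact fit, so the error is 0.0
```

Gradient values partition the same way, since the gradient of a sum of per-example losses is the sum of per-chunk gradients; that additivity is what gives the method its low synchronization cost.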

  17. Performance Confirmation Data Acquisition System

    International Nuclear Information System (INIS)

    D.W. Markman

    2000-01-01

    The purpose of this analysis is to identify and analyze concepts for the acquisition of data in support of the Performance Confirmation (PC) program at the potential subsurface nuclear waste repository at Yucca Mountain. The scope and primary objectives of this analysis are to: (1) Review the criteria for design as presented in the Performance Confirmation Data Acquisition/Monitoring System Description Document, by way of the Input Transmittal, Performance Confirmation Input Criteria (CRWMS M and O 1999c). (2) Identify and describe existing and potential new trends in data acquisition system software and hardware that would support the PC plan. The data acquisition software and hardware will support the field instruments and equipment that will be installed for the observation and perimeter drift borehole monitoring, and in-situ monitoring within the emplacement drifts. The exhaust air monitoring requirements will be supported by a data communication network interface with the ventilation monitoring system database. (3) Identify the concepts and features that a data acquisition system should have in order to support the PC process and its activities. (4) Based on PC monitoring needs and available technologies, further develop concepts of a potential data acquisition system network in support of the PC program and the Site Recommendation and License Application

  18. The challenge of networked enterprises for cloud computing interoperability

    OpenAIRE

    Mezgár, István; Rauschecker, Ursula

    2014-01-01

    Manufacturing enterprises have to organize themselves into effective system architectures, forming different types of Networked Enterprises (NE) to match fast-changing market demands. Cloud Computing (CC) is an important up-to-date computing concept for NE, as it offers significant financial and technical advantages besides high-level collaboration possibilities. As cloud computing is a new concept, the solutions for handling interoperability, portability, security, privacy and standardization c...

  19. Dynamic configuration of the CMS Data Acquisition cluster

    CERN Document Server

    Bauer, Gerry; Biery, Kurt; Boyer, Vincent; Branson, James; Cano, Eric; Cheung, Harry; Ciganek, Marek; Cittolin, Sergio; Coarasa, Jose Antonio; Deldicque, Christian; Dusinberre, Elizabeth; Erhan, Samim; Fortes Rodrigues, Fabiana; Gigi, Dominique; Glege, Frank; Gomez-Reino, Robert; Gutleber, Johannes; Hatton, Derek; Laurens, Jean-Francois; Lopez Perez, Juan Antonio; Meijers, Frans; Meschi, Emilio; Meyer, Andreas; Mommsen, Remigius K; Moser, Roland; O'Dell, Vivian; Oh, Alexander; Orsini, Luciano; Patras, Vaios; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Racz, Attila; Sakulin, Hannes; Sani, Matteo; Schieferdecker, Philipp; Schwick, Christoph; Shpakov, Dennis; Simon, Sean; Sumorok, Konstanty; Zanetti, Marco

    2010-01-01

    The CMS Data Acquisition cluster, which runs around 10000 applications, is configured dynamically at run time. XML configuration documents determine what applications are executed on each node and over what networks these applications communicate. Through this mechanism the DAQ System may be adapted to the required performance, partitioned in order to perform (test-) runs in parallel, or re-structured in case of hardware faults. This paper presents the CMS DAQ Configurator tool, which is used to generate comprehensive configurations of the CMS DAQ system based on a high-level description given by the user. Using a database of configuration templates and a database containing a detailed model of hardware modules, data and control links, nodes and the network topology, the tool automatically determines which applications are needed, on which nodes they should run, and over which networks the event traffic will flow. The tool computes application parameters and generates the XML configuration documents as well a...

  20. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    Science.gov (United States)

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation.
By coupling the parallel model population-based optimization method and the parallel computational framework, high
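The map/reduce split of fitness evaluation behind this approach can be caricatured in a few lines; the fitness function and candidate individuals below are purely illustrative stand-ins, not the GA-PSO hybrid itself:

```python
# Toy map/reduce-style fitness evaluation: the "map" phase scores candidate
# parameter vectors independently (parallelizable), and the "reduce" phase
# selects the best candidate across all mappers.
def fitness(params, target):
    # Illustrative fitness: negative squared distance to a target vector
    # (a real run would score each candidate against the gene profiles).
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def map_phase(population, target):
    return [(fitness(ind, target), ind) for ind in population]

def reduce_phase(scored):
    return max(scored)[1]  # best individual across all partial results

target = (1.0, 2.0)
population = [(0.0, 0.0), (1.0, 2.0), (2.0, 1.0)]
best = reduce_phase(map_phase(population, target))
```

Because each fitness evaluation is independent, the map phase is the part Hadoop distributes across the cluster; only the small scored tuples are shuffled to the reducer.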

  1. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    Science.gov (United States)

    1991-06-01

    Proceedings of the National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh... Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles... Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent

  2. Locating hardware faults in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running the same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
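The link-isolation logic described in this abstract can be sketched directly; the adjacency-list tree encoding and the `link_ok` predicate standing in for the test suite are assumptions for illustration:

```python
# Sketch: a parent node owns a defective link exactly when the test suite
# fails on its own test tree but succeeds on every child test tree.
def subtree_nodes(tree, root):
    nodes, stack = [], [root]
    while stack:
        n = stack.pop()
        nodes.append(n)
        stack.extend(tree.get(n, []))
    return nodes

def parent_has_defective_link(tree, parent, link_ok):
    own = subtree_nodes(tree, parent)
    parent_passes = all(link_ok(a, b) for a in own for b in tree.get(a, []))
    children_pass = all(
        all(link_ok(a, b)
            for a in subtree_nodes(tree, c) for b in tree.get(a, []))
        for c in tree.get(parent, []))
    return (not parent_passes) and children_pass

tree = {0: [1, 2], 1: [3, 4], 2: []}   # node -> children
bad_links = {(0, 2)}                    # the broken link (an assumption)
ok = lambda a, b: (a, b) not in bad_links
```

Here `parent_has_defective_link(tree, 0, ok)` is true (the fault lies on a link from node 0), while node 1's subtree tests cleanly.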

  3. A complex network approach to cloud computing

    International Nuclear Information System (INIS)

    Travieso, Gonzalo; Ruggiero, Carlos Antônio; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2016-01-01

    Cloud computing has become an important means to speed up computing. One problem heavily influencing the performance of such systems is the choice of nodes as servers responsible for executing the clients' tasks. In this article we report how complex networks can be used to model such a problem. More specifically, we investigate the processing performance of cloud systems underlaid by Erdős–Rényi (ER) and Barabási–Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of the cost of communication between the client and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter, the ER topology provides better performance than the BA for smaller average degrees, and the opposite behaviour for larger average degrees. With respect to cost, smaller values are found in the BA topology irrespective of the average degree. In addition, we also verified that it is easier to find good servers in ER than in BA networks. Surprisingly, balance and cost are not much affected by the presence of communities. However, for a well-defined community network, we found that it is important to assign each server to a different community so as to achieve better performance. (paper: interdisciplinary statistical mechanics)
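The communication-cost measure used in this study (distance from each client to its nearest server) can be sketched with a BFS on a toy graph; the 6-node ring below is illustrative, not the ER/BA ensembles of the paper:

```python
# Sketch: mean shortest-path distance from each client node to its nearest
# server, computed by breadth-first search on an unweighted graph.
from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def mean_cost(adj, servers):
    dists = [bfs_dist(adj, s) for s in servers]
    clients = [n for n in adj if n not in servers]
    return sum(min(d[n] for d in dists) for n in clients) / len(clients)

# A 6-node ring with the two servers placed opposite each other.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
cost = mean_cost(ring, servers={0, 3})  # every client is 1 hop from a server
```

The same routine applied to ER and BA samples would reproduce the paper's cost comparison; only the graph generator changes.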

  4. Technological Integration of Acquisitions in Digital Industries

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Toppenberg, Gustav

    2015-01-01

    Acquisitions have become essential tools to retain the technological edge in digital industries. This paper analyses the technological integration challenges in such acquisitions. Acquirers in digital industries are typically platform leaders in platform markets. They acquire (a) other platform providers to extend the platform core and to derive network effects by consolidating platform user groups, and (b) complement providers to create monopoly positions for the complements and for innovation complementarity. To enable these acquisition benefits, acquirers face technological integration challenges in process and product integration. Through a case study of Network Solutions Corp. (NSC), a Fortune 500 company that has acquired more than 175 business units, we develop four propositions explaining how the benefits of platform core and complement acquisitions are differently contingent......

  5. Educational Technology Network: a computer conferencing system dedicated to applications of computers in radiology practice, research, and education.

    Science.gov (United States)

    D'Alessandro, M P; Ackerman, M J; Sparks, S M

    1993-11-01

    Educational Technology Network (ET Net) is a free, easy to use, on-line computer conferencing system organized and funded by the National Library of Medicine that is accessible via the SprintNet (SprintNet, Reston, VA) and Internet (Merit, Ann Arbor, MI) computer networks. It is dedicated to helping bring together, in a single continuously running electronic forum, developers and users of computer applications in the health sciences, including radiology. ET Net uses the Caucus computer conferencing software (Camber-Roth, Troy, NY) running on a microcomputer. This microcomputer is located in the National Library of Medicine's Lister Hill National Center for Biomedical Communications and is directly connected to the SprintNet and the Internet networks. The advanced computer conferencing software of ET Net allows individuals who are separated in space and time to unite electronically to participate, at any time, in interactive discussions on applications of computers in radiology. A computer conferencing system such as ET Net allows radiologists to maintain contact with colleagues on a regular basis when they are not physically together. Topics of discussion on ET Net encompass all applications of computers in radiological practice, research, and education. ET Net has been in successful operation for 3 years and has a promising future aiding radiologists in the exchange of information pertaining to applications of computers in radiology.

  6. Classification and Analysis of Computer Network Traffic

    DEFF Research Database (Denmark)

    Bujlow, Tomasz

    2014-01-01

    ......various classification modes (decision trees, rulesets, boosting, softening thresholds) regarding the classification accuracy and the time required to create the classifier. We showed how to use our VBS tool to obtain per-flow, per-application, and per-content statistics of traffic in computer networks...

  7. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  8. A data acquisition protocol for a reactive wireless sensor network monitoring application.

    Science.gov (United States)

    Aderohunmu, Femi A; Brunelli, Davide; Deng, Jeremiah D; Purvis, Martin K

    2015-04-30

    Limiting energy consumption is one of the primary aims of most real-world deployments of wireless sensor networks. Unfortunately, attempts to optimize energy efficiency are often in conflict with the demand for network reactiveness to transmit urgent messages. In this article, we propose SWIFTNET: a reactive data acquisition scheme. It is built on the synergies arising from a combination of data reduction methods and energy-efficient data compression schemes. In particular, it combines compressed sensing, data prediction, and adaptive sampling strategies. We show how this approach dramatically reduces the amount of unnecessary data transmission in deployments for environmental monitoring and surveillance networks. SWIFTNET targets any monitoring application that requires high reactiveness with aggressive data collection and transmission. To test the performance of this method, we present a real-world testbed for wildfire monitoring as a use case. The results from our in-house deployment testbed of 15 nodes have proven favorable: on average, over 50% communication reduction is achieved compared with a default adaptive prediction method, without any loss in accuracy. In addition, SWIFTNET is able to guarantee reactiveness by adjusting the sampling interval from 5 min down to 15 s in our application domain.
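    The adaptive-sampling behaviour (relaxing toward 5 min when quiet, tightening toward 15 s when readings change quickly) can be illustrated with a toy controller. The halving/back-off rule and the threshold are invented for illustration and are not SWIFTNET's actual algorithm; only the interval bounds come from the abstract.

    ```python
    # Toy adaptive-sampling controller: shrink the interval on large changes,
    # relax it toward the low-power maximum when readings are calm.
    MAX_INTERVAL = 300.0   # 5 min, quiescent mode (bound from the abstract)
    MIN_INTERVAL = 15.0    # 15 s, reactive mode (bound from the abstract)

    def next_interval(current, prev_reading, reading, threshold=5.0):
        """Halve the interval on a large change, otherwise back off slowly."""
        if abs(reading - prev_reading) > threshold:
            return max(MIN_INTERVAL, current / 2)
        return min(MAX_INTERVAL, current * 1.5)

    interval = MAX_INTERVAL
    readings = [20.0, 21.0, 35.0, 50.0, 51.0, 51.5]   # e.g. a temperature spike
    for prev, cur in zip(readings, readings[1:]):
        interval = next_interval(interval, prev, cur)
        print(f"reading {cur:5.1f} -> sample again in {interval:.1f} s")
    ```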

  9. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  10. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.

  11. High Energy Physics Computer Networking: Report of the HEPNET Review Committee

    International Nuclear Information System (INIS)

    1988-06-01

    This paper discusses the computer networks available to high energy physics facilities for the transmission of data. Topics covered include existing and planned networks and HEPNET requirements

  12. Mergers + acquisitions.

    Science.gov (United States)

    Hoppszallern, Suzanna

    2002-05-01

    The hospital sector in 2001 led the health care field in mergers and acquisitions. Most deals involved a network augmenting its presence within a specific region or in a market adjacent to its primary service area. Analysts expect M&A activity to increase in 2002.

  13. State of the Art of Network Security Perspectives in Cloud Computing

    Science.gov (United States)

    Oh, Tae Hwan; Lim, Shinyoung; Choi, Young B.; Park, Kwang-Roh; Lee, Heejo; Choi, Hyunsang

    Cloud computing is now regarded as a social phenomenon that satisfies customers' needs. Arguably, cloud computing is the realization of those needs together with the primary principle of economy: gaining maximum benefit from minimum investment. We live in a connected society with a flood of information; without computers connected to the Internet, our daily activities and work would be impossible. Cloud computing can provide customers with custom-tailored application software and user environments based on the customer's needs, by adopting on-demand outsourcing of computing resources through the Internet. It also provides users with high-end computing power and expensive application software packages; accordingly, users access their data and application software hosted on a remote system. As cloud computing systems are connected to the Internet, their network security issues must be considered before real-world service. In this paper, a survey of network security issues in cloud computing is presented from the perspective of real-world service environments.

  14. Development of a Data Acquisition System for the BaBar CP Violation Experiment

    International Nuclear Information System (INIS)

    Claus, Richard

    1999-01-01

    Experiences developing the data acquisition system for the BaBar CP violation experiment located at the Stanford Linear Accelerator Center are presented. The BaBar detector consists of multiple independent subdetectors joined by a data acquisition system consisting of a large number of embedded PowerPC single-board computers residing in VME crates. The data acquisition software is layered on the VxWorks real-time operating system. It is partitionable, allowing subsystems (as well as test stands) to operate independently. Data are assimilated into events through a combination of shared memory and a high-performance network. This system presents data to a UNIX farm via a high-speed non-blocking Ethernet switch at a rate of 2 kHz. Topics such as bootstrapping and loading 200 processors, NFS file access for these processors, and software development and deployment are discussed

  15. Development of a Data Acquisition System for the BaBar CP Violation Experiment

    CERN Document Server

    Scott, I; Grosso, P; Hamilton, R T; Huffer, M E; O'Grady, C P; Russell, J J

    1999-01-01

    Experiences developing the data acquisition system for the BaBar CP violation experiment located at the Stanford Linear Accelerator Center are presented. The BaBar detector consists of multiple independent subdetectors joined by a data acquisition system consisting of a large number of embedded PowerPC single-board computers residing in VME crates. The data acquisition software is layered on the VxWorks real-time operating system. It is partitionable, allowing subsystems (as well as test stands) to operate independently. Data are assimilated into events through a combination of shared memory and a high-performance network. This system presents data to a UNIX farm via a high-speed non-blocking Ethernet switch at a rate of 2 kHz. Topics such as bootstrapping and loading 200 processors, NFS file access for these processors, and software development and deployment are discussed.

  16. Main concept of local area network protection on the basis of the SAAM 'TRAFFIC'

    International Nuclear Information System (INIS)

    Vasil'ev, P.M.; Kryukov, Yu.A.; Kuptsov, S.I.; Ivanov, V.V.; Koren'kov, V.V.

    2002-01-01

    In our previous paper we developed a system for acquisition, analysis and management of network traffic (SAAM 'Traffic') for a segment of the JINR local area computer network (JINR LAN). In the present work we consider well-known scenarios of attacks on local area networks and propose protection methods based on the SAAM 'Traffic'. Although the LAN protection system is installed on a router computer, it is not analogous to the firewall scheme and thus does not hinder the performance of distributed network applications. This makes it possible to apply the approach to GRID technologies, where firewall-based network protection essentially cannot be used. (author)

  17. SCADA System for the Modeling and Optimization of Oil Collecting Pipeline Network: A Case Study of Hassi Messaoud Oilfield

    OpenAIRE

    M. Aouadj; F. Naceri; M. Touileb; D. Sellami; M. Boukhatem

    2015-01-01

    The aims of this study are data acquisition, control and online modeling of an oil collecting pipeline network using a SCADA («Supervisory Control and Data Acquisition») system, allowing optimization of this network in real time by creating more exact models of on-site facilities. Indeed, the rapid development of computing systems has made older systems obsolete: their maintenance has become more and more expensive, and their performance no longer complies with modern company operations. SCADA sys...

  18. Atomic switch networks-nanoarchitectonic design of a complex system for natural computing.

    Science.gov (United States)

    Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Aono, M; Stieg, A Z; Gimzewski, J K

    2015-05-22

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing-a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.

  19. Test experience on an ultrareliable computer communication network

    Science.gov (United States)

    Abbott, L. W.

    1984-01-01

    The dispersed sensor processing mesh (DSPM) is an experimental, ultra-reliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of the paper is the system performance of the growth algorithm contained in the grow routine. The characteristics imparted to DSPM by the growth algorithm are also discussed. Data from an experimental DSPM network and software simulations of larger DSPM-type networks are used to examine the inherent limitation on growth time imposed by the growth algorithm and the relationship of growth time to network size and topology.

  20. Changing of the ELAN data acquisition to an integrated system with VME frontend acquisition and VAX work station analysis

    International Nuclear Information System (INIS)

    Foerster, W.

    1991-07-01

    A new data acquisition system for the ELAN experiment at the electron stretcher accelerator ELSA had become necessary due to changes in the experimental setup. Data acquisition and analysis, which formerly were both performed by a single computer system, are now handled separately by a VMEbus computer and a VAX workstation. Based on the software components MECDAS (Mainz Experiment Control and Data Acquisition System) and GOOSY (GSI Online Offline System), a powerful tool for data acquisition and analysis has been adapted to the requirements of the ELAN experiment. (orig.) [de

  1. Cloud Computing Services for Seismic Networks

    Science.gov (United States)

    Olson, Michael

    This thesis describes a compositional framework for developing situation awareness applications: applications that provide ongoing information about a user's changing environment. The thesis describes how the framework is used to develop a situation awareness application for earthquakes. The applications are implemented as Cloud computing services connected to sensors and actuators. The architecture and design of the Cloud services are described and measurements of performance metrics are provided. The thesis includes results of experiments on earthquake monitoring conducted over a year. The applications developed by the framework are (1) the CSN---the Community Seismic Network---which uses relatively low-cost sensors deployed by members of the community, and (2) SAF---the Situation Awareness Framework---which integrates data from multiple sources, including the CSN, CISN---the California Integrated Seismic Network, a network consisting of high-quality seismometers deployed carefully by professionals in the CISN organization and spread across Southern California---and prototypes of multi-sensor platforms that include carbon monoxide, methane, dust and radiation sensors.

  2. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    Science.gov (United States)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  3. Analysis of stationary availability factor of two-level backbone computer networks with arbitrary topology

    Science.gov (United States)

    Rahman, P. A.

    2018-05-01

    This paper deals with two-level backbone computer networks with arbitrary topology. A specialized method, offered by the author, for calculating the stationary availability factor of such networks is discussed; it is based on Markov reliability models for a set of independent repairable elements with given failure and repair rates, together with methods of discrete mathematics. A specialized algorithm for analyzing network connectivity, taking into account different kinds of network equipment failures, is also described. Finally, the paper presents an example of calculating the stationary availability factor for a backbone computer network with a given topology.
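    The availability arithmetic underlying such Markov models can be sketched for independent repairable elements: a single element with failure rate λ and repair rate μ has stationary availability A = μ/(λ+μ), and independent elements compose in series and parallel. The composition rules and example rates below are standard textbook assumptions, not the author's specific method.

    ```python
    # Stationary availability of independent repairable elements (sketch).
    def availability(lam, mu):
        """A = mu / (lam + mu) for one element with failure/repair rates."""
        return mu / (lam + mu)

    def series(*avails):        # all elements must be up
        p = 1.0
        for a in avails:
            p *= a
        return p

    def parallel(*avails):      # at least one element up
        p = 1.0
        for a in avails:
            p *= (1.0 - a)
        return 1.0 - p

    # Hypothetical backbone link: two redundant switches in parallel,
    # in series with a router (rates per hour, invented for illustration).
    sw = availability(lam=1e-4, mu=1e-2)
    rt = availability(lam=5e-5, mu=1e-2)
    print(series(parallel(sw, sw), rt))
    ```

    For a network with arbitrary topology, this per-element arithmetic is then combined with a connectivity analysis over all equipment-failure combinations, which is the part the paper's specialized algorithm addresses.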

  4. A new fault detection method for computer networks

    International Nuclear Information System (INIS)

    Lu, Lu; Xu, Zhengguo; Wang, Wenhai; Sun, Youxian

    2013-01-01

    Over the past few years, fault detection for computer networks has attracted extensive attention for its importance in network management. Most existing fault detection methods are based on active probing techniques, which can detect the occurrence of faults fast and precisely. But these methods suffer from the limitation of traffic overhead, especially in large-scale networks. To relieve the traffic overhead induced by active-probing-based methods, a new fault detection method, whose key is to divide the detection process into multiple stages, is proposed in this paper. During each stage, only a small region of the network is detected using a small set of probes. Meanwhile, the method also ensures that the entire network is covered after multiple detection stages. It guarantees that the traffic used by probes during each detection stage is sufficiently small that the network can operate without severe disturbance from the probes. Several simulation results verify the effectiveness of the proposed method
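    The staged-probing idea reads roughly as follows in code. The region partitioning and the `probe` function are assumed stand-ins for the paper's probe selection; the point is only that each stage puts a bounded number of probes on the wire while all stages together cover every node.

    ```python
    # Sketch: probe one small region per stage; the union of regions covers
    # the whole network, so per-stage probe traffic stays bounded.
    def make_stages(nodes, region_size):
        """Partition nodes into regions; each region is one detection stage."""
        return [nodes[i:i + region_size]
                for i in range(0, len(nodes), region_size)]

    def detect(stages, probe):
        """probe(node) -> True if node responds; return suspected-faulty nodes."""
        faulty = []
        for region in stages:
            for node in region:          # only len(region) probes on the wire now
                if not probe(node):
                    faulty.append(node)
        return faulty

    nodes = [f"n{i}" for i in range(12)]
    stages = make_stages(nodes, region_size=4)      # 3 stages of 4 probes each
    faulty = detect(stages, probe=lambda n: n != "n7")
    print(stages[0], faulty)   # ['n0', 'n1', 'n2', 'n3'] ['n7']
    ```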

  5. AN OVERVIEW OF QUALITY OF SERVICE COMPUTER NETWORK

    OpenAIRE

    Mrs. Amandeep Kaur

    2011-01-01

    This paper highlights some of the basic concepts of QoS and the major research areas of quality of service in computer networks. It also compares some of the current QoS routing techniques.

  6. Computer network data communication controller for the Plutonium Protection System (PPS)

    International Nuclear Information System (INIS)

    Rogers, M.S.

    1978-10-01

    Systems which employ several computers for distributed processing must provide communication links between the computers to effectively utilize their capacity. The technique of using a central network controller to supervise and route messages on a multicomputer digital communications net has certain economic and performance advantages over alternative implementations. Conceptually, the number of stations (computers) which can be accommodated by such a controller is unlimited, but practical considerations dictate a maximum of about 12 to 15. A Data Network Controller (DNC) has been designed around a M6800 microprocessor for use in the Plutonium Protection System (PPS) demonstration facilities

  7. Proceedings (slides, posters) of the 7. IAEA Technical Meeting on Control, Data Acquisition, and Remote Participation for Fusion Research

    International Nuclear Information System (INIS)

    2009-01-01

    The main objective of this meeting is to present and discuss new developments and perspectives in the areas of control, data acquisition and remote participation for nuclear research around the world. The following topics have been covered: 1) plasma control, 2) machine control, monitoring, safety and remote manipulation, 3) data acquisition and signal processing, 4) database techniques for information storage and retrieval, 5) advanced computing and massive data analysis, 6) remote participation and virtual laboratory, 7) fast network technology and its application, and 8) ITER

  8. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing.

    Science.gov (United States)

    Sillin, Henry O; Aguilera, Renato; Shieh, Hsien-Hang; Avizienis, Audrius V; Aono, Masakazu; Stieg, Adam Z; Gimzewski, James K

    2013-09-27

    Atomic switch networks (ASNs) have been shown to generate network level dynamics that resemble those observed in biological neural networks. To facilitate understanding and control of these behaviors, we developed a numerical model based on the synapse-like properties of individual atomic switches and the random nature of the network wiring. We validated the model against various experimental results highlighting the possibility to functionalize the network plasticity and the differences between an atomic switch in isolation and its behaviors in a network. The effects of changing connectivity density on the nonlinear dynamics were examined as characterized by higher harmonic generation in response to AC inputs. To demonstrate their utility for computation, we subjected the simulated network to training within the framework of reservoir computing and showed initial evidence of the ASN acting as a reservoir which may be optimized for specific tasks by adjusting the input gain. The work presented represents steps in a unified approach to experimentation and theory of complex systems to make ASNs a uniquely scalable platform for neuromorphic computing.

  9. IP Network Design and Implementation for the Caltech-USGS Element of TriNet

    Science.gov (United States)

    Johnson, M. L.; Busby, R.; Watkins, M.; Schwarz, S.; Hauksson, E.

    2001-12-01

    The new seismic network IP numbering scheme for the Caltech-USGS element of TriNet is designed to provide emergency response plans for computer outages and/or telemetry circuit failures so that data acquisition may continue with minimal interruption. IP numbers from the seismic stations through the Caltech acquisition machines are assigned using private, non-routable IP addresses, which allows the network administrator to create redundancy in the network design, more freedom in choosing IP numbers, and uniformity in the LAN and WAN network addressing. The network scheme used by the Caltech-USGS element of TriNet is designed to create redundancy and load sharing over three or more T1 circuits. A T1 circuit can support 80 dataloggers sending data at a design rate of 19.2 kbps or 120 dataloggers transmitting at a nominal rate of 12.8 kbps. During a circuit detour, the 80 dataloggers on the failed T1 are equally divided between the remaining two circuits. This increases the load on the remaining two circuits to 120 dataloggers each, which is the maximum load each T1 can handle at the nominal rate. Each T1 circuit has a router interface onto a LAN at Caltech with an independent subnet address. Some devices, such as Solaris computers, allow a single interface to be numbered with several IP addresses, a so-called "multinetted" interface. This allows the central acquisition computers to appear with distinct addresses that are routable via different T1 circuits, but simplifies the physical cabling between devices. We identify these T1 circuits as T1-1, T1-2, and T1-3. At the remote end, each Frame Relay Access Device (FRAD) and its connected datalogger(s) form a subnetted LAN. The numbering is arranged so that the second octet in the LAN IP address of the FRAD and datalogger identifies the datalogger's primary and alternate T1 circuits. For example, a LAN with an IP address of 10.12.0.0/24 has T1-1 as its primary T1 and T1-2 as its alternate circuit. Stations with this number scheme are
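    The capacity figures check out arithmetically, and the octet encoding can be sketched. The `circuits` decoding below is our guess at the convention (tens digit = primary T1, units digit = alternate), inferred only from the 10.12.0.0/24 example in the abstract.

    ```python
    # Back-of-the-envelope check of the T1 capacity figures, plus an assumed
    # decoding of primary/alternate circuits from the second octet.
    design_load = 80 * 19.2     # kbps: 80 dataloggers at the design rate
    nominal_load = 120 * 12.8   # kbps: 120 dataloggers at the nominal rate
    print(design_load, nominal_load)   # both 1536.0 kbps, a full T1 payload

    def circuits(ip):
        """'10.12.0.5' -> (1, 2): tens digit = primary T1, units = alternate.
        (Assumed encoding, matching the 10.12.0.0/24 example.)"""
        second_octet = int(ip.split(".")[1])
        return divmod(second_octet, 10)

    print(circuits("10.12.0.5"))       # (1, 2): primary T1-1, alternate T1-2
    ```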

  10. Computer network prepared to handle massive data flow

    CERN Multimedia

    2006-01-01

    "Massive quantities of data will soon begin flowing from the largest scientific instrument ever built into an international network of computer centers, including one operated jointly by the University of Chicago and Indiana University." (2 pages)

  11. Embedded, everywhere: a research agenda for networked systems of embedded computers

    National Research Council Canada - National Science Library

    Committee on Networked Systems of Embedded Computers; National Research Council Staff; Division on Engineering and Physical Sciences; Computer Science and Telecommunications Board; National Academy of Sciences

    2001-01-01

    .... Embedded, Everywhere explores the potential of networked systems of embedded computers and the research challenges arising from embedding computation and communications technology into a wide variety of applications...

  12. KENS data acquisition system KENSnet

    International Nuclear Information System (INIS)

    Arai, Masatoshi; Furusaka, Michihiro; Satoh, Setsuo; Johnson, M.W.

    1988-01-01

    The installation of a new data acquisition system, KENSnet, has been completed at the KENS neutron facility. For data collection, 160 Mbytes are necessary for temporary disk storage, and 1 MIPS of CPU is required. For the computing system, models were chosen from the VAX family of computers running their proprietary operating system VMS. The VMS operating system has a very user-friendly interface and is well suited to instrument control applications. New data acquisition electronics were developed. A gate module receives a signal of proton extraction time from the accelerator, and checks the veto signals from the sample environment equipment (vacuum, temperature, chopper phasing, etc.). Then the signal is issued to a delay-time module. A time-control module starts timing from the delayed start signal from the delay-time module, and distributes an encoded time-boundary address to memory modules at the preset times, enabling the memory modules to accumulate data histograms. The data acquisition control program (ICP) and the general data analysis program (Genie) were both developed at ISIS, and have been installed in the new data acquisition system. They give the experimenter 'user-friendly' data acquisition and a good environment for data manipulation. The ICP controls the DAE and transfers the histogram data into the computers. (N.K.)

  13. Acquisition Footprint Attenuation Driven by Seismic Attributes

    Directory of Open Access Journals (Sweden)

    Cuellar-Urbano Mayra

    2014-04-01

    Full Text Available Acquisition footprint, one of the major problems that PEMEX faces in seismic imaging, is noise highly correlated to the geometric array of sources and receivers used for onshore and offshore seismic acquisitions. It prevails in spite of measures taken during acquisition and data processing. This pattern, throughout the image, is easily confused with geological features and misguides seismic attribute computation. In this work, we use seismic data from PEMEX Exploración y Producción to show the conditioning process for removing random and coherent noise using linear filters. Geometric attributes used in a workflow were computed for obtaining an acquisition footprint noise model and adaptively subtract it from the seismic data.

  14. Distinguishing humans from computers in the game of go: A complex network approach

    Science.gov (United States)

    Coquidé, C.; Georgeot, B.; Giraud, O.

    2017-08-01

    We compare complex networks built from the game of go and obtained from databases of human-played games with those obtained from computer-played games. Our investigations show that statistical features of the human-based networks and the computer-based networks differ, and that these differences can be statistically significant on a relatively small number of games using specific estimators. We show that the deterministic or stochastic nature of the computer algorithm playing the game can also be distinguished from these quantities. This can be seen as a tool to implement a Turing-like test for go simulators.

  15. NASA/DOD Aerospace Knowledge Diffusion Research Project. Paper 47: The value of computer networks in aerospace

    Science.gov (United States)

    Bishop, Ann Peterson; Pinelli, Thomas E.

    1995-01-01

    This paper presents data on the value of computer networks that were obtained from a national survey of 2000 aerospace engineers that was conducted in 1993. Survey respondents reported the extent to which they used computer networks in their work and communication and offered their assessments of the value of various network types and applications. They also provided information about the positive impacts of networks on their work, which presents another perspective on value. Finally, aerospace engineers' recommendations on network implementation present suggestions for increasing the value of computer networks within aerospace organizations.

  16. The Niwot Ridge Subalpine Forest US-NR1 AmeriFlux site - Part 1: Data acquisition and site record-keeping

    Science.gov (United States)

    Burns, Sean P.; Maclean, Gordon D.; Blanken, Peter D.; Oncley, Steven P.; Semmer, Steven R.; Monson, Russell K.

    2016-09-01

    The Niwot Ridge Subalpine Forest AmeriFlux site (US-NR1) has been measuring eddy-covariance ecosystem fluxes of carbon dioxide, heat, and water vapor since 1 November 1998. Throughout this 17-year period there have been changes to the instrumentation and improvements to the data acquisition system. Here, in Part 1 of this three-part series of papers, we describe the hardware and software used for data-collection and metadata documentation. We made changes to the data acquisition system that aimed to reduce the system complexity, increase redundancy, and be as independent as possible from any network outages. Changes to facilitate these improvements were (1) switching to a PC/104-based computer running the National Center for Atmospheric Research (NCAR) In-Situ Data Acquisition Software (NIDAS) that saves the high-frequency data locally and over the network, and (2) time-tagging individual 10 Hz serial data samples using network time protocol (NTP) coupled to a GPS-based clock, providing a network-independent, accurate time base. Since making these improvements almost 2 years ago, the successful capture of high-rate data has been better than 99.98 %. We also provide philosophical concepts that shaped our design of the data system and are applicable to many different types of environmental data collection.

  17. Computer-Supported Modelling of Multimodal Transportation Networks Rationalization

    Directory of Open Access Journals (Sweden)

    Ratko Zelenika

    2007-09-01

    Full Text Available This paper deals with issues of shaping and functioning of computer programs in the modelling and solving of multimodal transportation network problems. A methodology for the integrated use of a programming language for mathematical modelling is defined, as well as spreadsheets for the solving of complex multimodal transportation network problems. The paper contains a comparison of the partial and integral methods of solving multimodal transportation networks. The basic hypothesis set forth in this paper is that the integral method results in better multimodal transportation network rationalization effects, whereas a multimodal transportation network model based on the integral method, once built, can be used as the basis for all kinds of transportation problems within multimodal transport. As opposed to linear transport problems, a multimodal transport network can assume very complex shapes. This paper contains a comparison of the partial and integral approach to transportation network solving. In the partial approach, a straightforward model of a transportation network, which can be solved through the use of the Solver computer tool within the Excel spreadsheet interface, is quite sufficient. In the solving of a multimodal transportation problem through the integral method, it is necessary to apply sophisticated mathematical modelling programming languages which support the use of complex matrix functions and the processing of a vast amount of variables and limitations. The LINGO programming language is more abstract than the Excel spreadsheet, and it requires a certain programming knowledge. The definition and presentation of a problem logic within Excel, in a manner which is acceptable to computer software, is an ideal basis for modelling in the LINGO programming language, as well as a faster and more effective implementation of the mathematical model. This paper provides proof for the fact that it is more rational to solve the problem of multimodal transportation networks by

  18. Human Inspired Self-developmental Model of Neural Network (HIM): Introducing Content/Form Computing

    Science.gov (United States)

    Krajíček, Jiří

    This paper presents cross-disciplinary research connecting medical/psychological evidence on human abilities with the need in informatics to update current models in computer science to support alternative methods of computation and communication. In [10] we have already proposed a hypothesis introducing the concept of a human information model (HIM) as a cooperative system. Here we continue the HIM design in detail. In our design, we first introduce the Content/Form computing system, which is a new principle extending present methods in evolutionary computing (genetic algorithms, genetic programming). We then apply this system to the HIM (a type of artificial neural network) model as its basic network self-developmental paradigm. The main inspiration for our natural/human design comes from the well-known concept of artificial neural networks, medical/psychological evidence, and Sheldrake's theory of "Nature as Alive" [22].

  19. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics which are grouped together into chapters devoted to the discussion of novelties and innovations related to the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and  bio-inspired memristor-based networks.  Providing insights into the latest research interest from a pool of international experts coming from different research fields, the volume becomes valuable to all those with any interest in a holistic approach to implement believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  20. The ENSDF radioactivity data base for IBM-PC and computer network access

    International Nuclear Information System (INIS)

    Ekstroem, P.; Spanier, L.

    1989-08-01

    A data base system for radioactivity gamma rays is described. A base with approximately 15000 gamma rays from 2777 decays is available for installation on the hard disk of a PC, and a complete system with approximately 73000 gamma rays is available for on-line access via the NORDic University computer NETwork (NORDUNET) and the Swedish University computer NETwork (SUNET)

  1. Emerging trends in neuro engineering and neural computation

    CERN Document Server

    Lee, Kendall; Garmestani, Hamid; Lim, Chee

    2017-01-01

    This book focuses on neuro-engineering and neural computing, a multi-disciplinary field of research attracting considerable attention from engineers, neuroscientists, microbiologists and material scientists. It explores a range of topics concerning the design and development of innovative neural and brain interfacing technologies, as well as novel information acquisition and processing algorithms to make sense of the acquired data. The book also highlights emerging trends and advances regarding the applications of neuro-engineering in real-world scenarios, such as neural prostheses, diagnosis of neural degenerative diseases, deep brain stimulation, biosensors, real neural network-inspired artificial neural networks (ANNs) and the predictive modeling of information flows in neuronal networks. The book is broadly divided into three main sections including: current trends in technological developments, neural computation techniques to make sense of the neural behavioral data, and application of these technologie...

  2. Evaluation of a modified Fitts law brain-computer interface target acquisition task in able and motor disabled individuals

    Science.gov (United States)

    Felton, E. A.; Radwin, R. G.; Wilson, J. A.; Williams, J. C.

    2009-10-01

    A brain-computer interface (BCI) is a communication system that takes recorded brain signals and translates them into real-time actions, in this case movement of a cursor on a computer screen. This work applied Fitts' law to the evaluation of performance on a target acquisition task during sensorimotor rhythm-based BCI training. Fitts' law, which has been used as a predictor of movement time in studies of human movement, was used here to determine the information transfer rate, which was based on target acquisition time and target difficulty. The information transfer rate was used to make comparisons between control modalities and subject groups on the same task. Data were analyzed from eight able-bodied and five motor disabled participants who wore an electrode cap that recorded and translated their electroencephalogram (EEG) signals into computer cursor movements. Direct comparisons were made between able-bodied and disabled subjects, and between EEG and joystick cursor control in able-bodied subjects. Fitts' law aptly described the relationship between movement time and index of difficulty for each task movement direction when evaluated separately and averaged together. This study showed that Fitts' law can be successfully applied to computer cursor movement controlled by neural signals.
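    The quantities the study rests on are easy to make concrete. A minimal sketch using the Shannon formulation of Fitts' index of difficulty, with illustrative numbers rather than values from the paper:

    ```python
    import math

    def index_of_difficulty(distance, width):
        # Shannon formulation of Fitts' index of difficulty, in bits:
        # ID = log2(D / W + 1) for target distance D and target width W
        return math.log2(distance / width + 1.0)

    def information_transfer_rate(distance, width, movement_time):
        # Throughput in bits/s: task difficulty divided by acquisition time
        return index_of_difficulty(distance, width) / movement_time

    # Illustrative target: 7 units away, 1 unit wide, acquired in 1.5 s
    id_bits = index_of_difficulty(7.0, 1.0)              # log2(8) = 3.0 bits
    rate = information_transfer_rate(7.0, 1.0, 1.5)      # 3.0 / 1.5 = 2.0 bits/s
    ```

    Computing this rate per movement direction, and averaged over directions, is what allows the direct comparison between EEG and joystick cursor control described above.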

  3. Control and Data Acquisition System for the Spanish Beamline (BM25) at the ESRF

    International Nuclear Information System (INIS)

    Pereira Gonzalez, A.; Olalla Garcia, C.; Sanchez Sanz, J.; Castro, G. R.

    2005-01-01

    A new control and data acquisition system has been developed for the BM25 Spanish beamline at the ESRF. The system is based on VMEbus, the Motorola PreP architecture, and the Linux operating system, and is linked to a local Ethernet network which provides communication with the servers (PC workstations). On these computers, the data are available for general use and analysis. The data acquisition consists of many channels connected mainly to the VME crates, mutually independent and fully programmable through drivers, CLUI and GUI interfaces, together with a set of independent systems (embedded systems, PLCs, and others) controlling the safety aspects. This report describes the system in terms of its architecture, the electronics interfacing to the process hardware, and the functionality and application development facilities it provides through the software and the data acquisition. (Author) 18 refs

  4. Computational Analysis of Molecular Interaction Networks Underlying Change of HIV-1 Resistance to Selected Reverse Transcriptase Inhibitors.

    Science.gov (United States)

    Kierczak, Marcin; Dramiński, Michał; Koronacki, Jacek; Komorowski, Jan

    2010-12-12

    Despite more than two decades of research, HIV resistance to drugs remains a serious obstacle in developing efficient AIDS treatments. Several computational methods have been developed to predict resistance level from the sequence of viral proteins such as reverse transcriptase (RT) or protease. These methods, while powerful and accurate, give very little insight into the molecular interactions that underlie acquisition of drug resistance/hypersusceptibility. Here, we attempt to fill this gap by using our Monte Carlo feature selection and interdependency discovery method (MCFS-ID) to elucidate molecular interaction networks that characterize viral strains with altered drug resistance levels. We analyzed a number of HIV-1 RT sequences annotated with drug resistance level using the MCFS-ID method. This let us derive interdependency networks that characterize change of drug resistance to six selected RT inhibitors: Abacavir, Lamivudine, Stavudine, Zidovudine, Tenofovir and Nevirapine. The networks consider interdependencies at the level of physicochemical properties of mutating amino acids, e.g., polarity. We mapped each network on the 3D structure of RT in an attempt to understand the molecular meaning of interacting pairs. The discovered interactions describe several known drug resistance mechanisms and, importantly, some previously unidentified ones. Our approach can be easily applied to a whole range of problems from the domain of protein engineering. A portable Java implementation of our MCFS-ID method is freely available for academic users and can be obtained at: http://www.ipipan.eu/staff/m.draminski/software.htm.

  5. Proceedings of the conference on computing in high energy physics '94

    International Nuclear Information System (INIS)

    Loken, S.C.

    1994-01-01

    This report contains papers on the following topics: Triggering, data acquisition, online and control systems; hardware architectures; networks and interconnections; data handling and storage systems; software methodologies, languages and tools; user interfaces and visualization; computation; and information systems and multimedia. The individual papers have been indexed and cataloged elsewhere

  6. Synthetic tetracycline-inducible regulatory networks: computer-aided design of dynamic phenotypes

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2007-01-01

    Full Text Available Abstract Background Tightly regulated gene networks, precisely controlling the expression of protein molecules, have received considerable interest from the biomedical community due to their promising applications. Among the most well studied inducible transcription systems are the tetracycline regulatory expression systems based on the tetracycline resistance operon of Escherichia coli, Tet-Off (tTA) and Tet-On (rtTA). Despite their initial success and improved designs, limitations still persist, such as low inducer sensitivity. Instead of looking at these networks statically, and simply changing or mutating the promoter and operator regions by trial and error, a systematic investigation of the dynamic behavior of the network can result in rational design of regulatory gene expression systems. Sophisticated algorithms can accurately capture the dynamical behavior of gene networks. With computer-aided design, we aim to improve the synthesis of regulatory networks and propose new designs that enable tighter control of expression. Results In this paper we engineer novel networks by recombining existing genes or parts of genes. We synthesize four novel regulatory networks based on the Tet-Off and Tet-On systems. We model all the known individual biomolecular interactions involved in transcription, translation, regulation and induction. With multiple time-scale stochastic-discrete and stochastic-continuous models we accurately capture the transient and steady state dynamics of these networks. Important biomolecular interactions are identified and the strength of the interactions engineered to satisfy design criteria. A set of clear design rules is developed and appropriate mutants of regulatory proteins and operator sites are proposed. Conclusion The complexity of biomolecular interactions is accurately captured through computer simulations. Computer simulations allow us to look into the molecular level, portray the dynamic behavior of gene regulatory

  7. The FINUDA data acquisition system

    International Nuclear Information System (INIS)

    Cerello, P.; Marcello, S.; Filippini, V.; Fiore, L.; Gianotti, P.; Raimondo, A.

    1996-07-01

    A parallel scalable Data Acquisition System, based on VME, has been developed to be used in the FINUDA experiment, scheduled to run at the DAPHNE machine at Frascati starting from 1997. The acquisition software runs on embedded RTPC 8067 processors using the LynxOS operating system. The readout of event fragments is coordinated by a suitable trigger Supervisor. Data read by different controllers are transported via a dedicated bus to a Global Event Builder running on a UNIX machine. Commands from and to the VME processors are sent via socket-based network protocols. The network hardware is presently Ethernet, but it can easily be changed to optical fiber.

  8. Paralelno umrežavanje računara / Parallel networking of the computers

    Directory of Open Access Journals (Sweden)

    Milojko Jevtović

    2007-04-01

    Full Text Available This paper presents an original concept for the parallel networking of computers and local area networks (LANs), i.e. their interconnection and simultaneous communication over several different transport telecommunications networks. One solution for parallel networking is described which enables reliable transfer of multimedia traffic and real-time data transmission between computers or LANs simultaneously over N (N = 1, 2, 3, 4, ...) different, mutually independent wide area networks (WANs). Connections between the computers or LANs and the wide area networks are realized using universal modems, whose design is also briefly presented.

  9. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a panel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effects of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.

  10. Parallel computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, D. C.; Murthy, D. V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic analysis capability on a distributed-memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a three-dimensional unsteady aerodynamic model and a panel discretization. Efficiencies up to 85 percent are demonstrated using 32 processors. The effects of subtask ordering, problem size and network topology are presented. A comparison to results on a shared-memory computer indicates that higher speedup is achieved on the distributed-memory system.
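    The efficiency figure quoted in these two records follows from the standard speedup/efficiency definitions. A minimal sketch; the timings below are hypothetical, chosen only to reproduce roughly 85 percent efficiency on 32 processors, and are not measurements from the papers:

    ```python
    def speedup(t_serial, t_parallel):
        # Ratio of single-processor time to p-processor time
        return t_serial / t_parallel

    def efficiency(t_serial, t_parallel, n_processors):
        # Fraction of ideal linear speedup actually achieved
        return speedup(t_serial, t_parallel) / n_processors

    # Hypothetical timings: 320 s on one transputer, 11.8 s on 32
    eff = efficiency(320.0, 11.8, 32)   # close to 0.85, i.e. ~85 percent
    ```

    Efficiency below 1.0 reflects communication and load-imbalance overheads, which is why subtask ordering and network topology matter in the results above.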

  11. 1993 CERN school of computing. Proceedings

    International Nuclear Information System (INIS)

    Vandoni, C.E.; Verkerk, C.

    1994-01-01

    These proceedings contain the majority of the lectures given at the 1993 CERN School of Computing. Artificial neural networks were treated with particular emphasis on applications in particle physics. A discussion of triggering for experiments at the proposed LHC machine provided a direct connection to data acquisition in this field, whereas another aspect of signal processing was seen in the description of a gravitational wave interferometer. Some of the more general aspects of data handling covered included parallel processing, the IEEE mass storage system, and the use of object stores of events. Lectures on broadband telecommunications networks and asynchronous transfer mode described some recent developments in communications. The analysis and visualization of the data were discussed in the talks on general-purpose portable software tools (PAW++, KUIP and PIAF) and on the uses of computer animation and virtual reality to this end. Lectures on open software discussed operating systems and distributed computing, and the evolution of products like Unix, NT and DCE. (orig.)

  12. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.

    Science.gov (United States)

    Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  13. Computational Models and Emergent Properties of Respiratory Neural Networks

    Science.gov (United States)

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  14. Main control computer security model of closed network systems protection against cyber attacks

    Science.gov (United States)

    Seymen, Bilal

    2014-06-01

    The model brings data input/output under control in closed network systems, maintains the system securely, and controls the flow of information through the Main Control Computer, which also brings the network traffic under control against cyber-attacks. The network, which can be controlled single-handedly thanks to the system designed to enable the network users to enter data into the system or extract data from it securely, is intended to minimize security gaps. Moreover, data input/output records can be kept by means of the user account assigned to each user, and retroactive tracking can be carried out if requested. Because the measures that need to be taken for each computer on the network regarding cyber security require high cost, this model is intended to provide a cost-effective working environment, provided that the Main Control Computer has updated hardware.

  15. Endoleak detection using single-acquisition split-bolus dual-energy computer tomography (DECT)

    Energy Technology Data Exchange (ETDEWEB)

    Javor, D.; Wressnegger, A.; Unterhumer, S.; Kollndorfer, K.; Nolz, R.; Beitzke, D.; Loewe, C. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria)

    2017-04-15

    To assess a single-phase, dual-energy computed tomography (DECT) protocol with a split-bolus technique and reconstruction of virtual non-enhanced images for the detection of endoleaks after endovascular aneurysm repair (EVAR). Fifty patients referred for routine follow-up post-EVAR CT and a history of at least one post-EVAR follow-up CT examination using our standard biphasic (arterial and venous phase) routine protocol (which was used as the reference standard) were included in this prospective trial. An in-patient comparison and an analysis of the split-bolus protocol and the previously used double-phase protocol were performed with regard to differences in diagnostic accuracy, radiation dose, and image quality. The analysis showed a significant reduction of radiation dose of up to 42 %, using the single-acquisition split-bolus protocol, while maintaining a comparable diagnostic accuracy (primary endoleak detection rate of 96 %). Image quality between the two protocols was comparable and only slightly inferior for the split-bolus scan (2.5 vs. 2.4). Using the single-acquisition, split-bolus approach allows for a significant dose reduction while maintaining high image quality, resulting in effective endoleak identification. (orig.)

  16. Computationally Efficient Nonlinear Bell Inequalities for Quantum Networks

    Science.gov (United States)

    Luo, Ming-Xing

    2018-04-01

    The correlations in quantum networks have attracted strong interest with new types of violations of the locality. The standard Bell inequalities cannot characterize the multipartite correlations that are generated by multiple sources. The main problem is that no computationally efficient method is available for constructing useful Bell inequalities for general quantum networks. In this work, we show a significant improvement by presenting new, explicit Bell-type inequalities for general networks including cyclic networks. These nonlinear inequalities are related to the matching problem of an equivalent unweighted bipartite graph that allows constructing a polynomial-time algorithm. For the quantum resources consisting of bipartite entangled pure states and generalized Greenberger-Horne-Zeilinger (GHZ) states, we prove the generic nonmultilocality of quantum networks with multiple independent observers using new Bell inequalities. The violations are maximal with respect to the presented Tsirelson's bound for Einstein-Podolsky-Rosen states and GHZ states. Moreover, these violations hold for Werner states or some general noisy states. Our results suggest that the presented Bell inequalities can be used to characterize experimental quantum networks.

  17. Innovations and advances in computing, informatics, systems sciences, networking and engineering

    CERN Document Server

    Elleithy, Khaled

    2015-01-01

    Innovations and Advances in Computing, Informatics, Systems Sciences, Networking and Engineering  This book includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Informatics, and Systems Sciences, and Engineering. It includes selected papers from the conference proceedings of the Eighth and some selected papers of the Ninth International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2012 & CISSE 2013). Coverage includes topics in: Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.  ·       Provides the latest in a series of books growing out of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering; ·       Includes chapters in the most a...

  18. AN EVALUATION AND IMPLEMENTATION OF COLLABORATIVE AND SOCIAL NETWORKING TECHNOLOGIES FOR COMPUTER EDUCATION

    OpenAIRE

    Ronnie Cheung; Calvin Wan

    2011-01-01

    We have developed a collaborative and social networking environment that integrates the knowledge and skills in communication and computing studies with a multimedia development project. The outcomes of the students’ projects show that computer literacy can be enhanced through a cluster of communication, social, and digital skills. Experience in implementing a web-based social networking environment shows that the new media is an effective means of enriching knowledge by sharing in computer l...

  19. Topology and computational performance of attractor neural networks

    International Nuclear Information System (INIS)

    McGraw, Patrick N.; Menzinger, Michael

    2003-01-01

    To explore the relation between network structure and function, we studied the computational performance of Hopfield-type attractor neural nets with regular lattice, random, small-world, and scale-free topologies. The random configuration is the most efficient for storage and retrieval of patterns by the network as a whole. However, in the scale-free case retrieval errors are not distributed uniformly among the nodes. The portion of a pattern encoded by the subset of highly connected nodes is more robust and efficiently recognized than the rest of the pattern. The scale-free network thus achieves a very strong partial recognition. The implications of these findings for brain function and social dynamics are suggestive

  20. MDA-image: an environment of networked desktop computers for teleradiology/pathology.

    Science.gov (United States)

    Moffitt, M E; Richli, W R; Carrasco, C H; Wallace, S; Zimmerman, S O; Ayala, A G; Benjamin, R S; Chee, S; Wood, P; Daniels, P

    1991-04-01

    MDA-Image, a project of The University of Texas M. D. Anderson Cancer Center, is an environment of networked desktop computers for teleradiology/pathology. Radiographic film is digitized with a film scanner and histopathologic slides are digitized using a red, green, and blue (RGB) video camera connected to a microscope. Digitized images are stored on a data server connected to the institution's computer communication network (Ethernet) and can be displayed from authorized desktop computers connected to Ethernet. Images are digitized for cases presented at the Bone Tumor Management Conference, a multidisciplinary conference in which treatment options are discussed among clinicians, surgeons, radiologists, pathologists, radiotherapists, and medical oncologists. These radiographic and histologic images are shown on a large screen computer monitor during the conference. They are available for later review for follow-up or representation.

  1. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    Science.gov (United States)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue. Control systems that operate in networks are especially affected by this issue. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of the problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of the multi-agent control of the systems in parallel mode with varying degrees of detail.

  2. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    Science.gov (United States)

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
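    For comparison, the subset sum instance {2, 5, 9} mentioned above can be solved on a conventional computer by sequentially enumerating all 2^n subsets, which is exactly the exponential workload the molecular-motor-propelled agents explore in parallel. A brute-force sketch:

    ```python
    # Brute-force subset sum over the instance {2, 5, 9} from the abstract.
    # A sequential computer enumerates all 2^n subsets one by one; the
    # nanofabricated device explores the same search tree in parallel.
    from itertools import combinations

    def achievable_sums(values):
        sums = set()
        for r in range(len(values) + 1):
            for combo in combinations(values, r):
                sums.add(sum(combo))
        return sums

    print(sorted(achievable_sums([2, 5, 9])))
    # [0, 2, 5, 7, 9, 11, 14, 16]
    ```

    Deciding whether a given target is reachable is then a membership test in this set; the exponential growth of the enumeration with n is what makes the problem a benchmark for parallel-computation approaches.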

  3. Planning and management of cloud computing networks

    Science.gov (United States)

    Larumbe, Federico

    The evolution of the Internet has a great impact on a big part of the population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption. If the power consumption of telecommunication networks and data centers were considered as that of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with an Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. 
We start by analyzing the planning of cloud computing networks to get a

  4. Traffic Dynamics of Computer Networks

    Science.gov (United States)

    Fekete, Attila

    2008-10-01

    Two important aspects of the Internet, namely the properties of its topology and the characteristics of its data traffic, have attracted growing attention of the physics community. My thesis has considered problems of both aspects. First I studied the stochastic behavior of TCP, the primary algorithm governing traffic in the current Internet, in an elementary network scenario consisting of a standalone infinite-sized buffer and an access link. The effect of the fast recovery and fast retransmission (FR/FR) algorithms is also considered. I showed that my model can be extended further to involve the effect of link propagation delay, characteristic of WAN. I continued my thesis with the investigation of finite-sized semi-bottleneck buffers, where packets can be dropped not only at the link, but also at the buffer. I demonstrated that the behavior of the system depends only on a certain combination of the parameters. Moreover, an analytic formula was derived that gives the ratio of packet loss rate at the buffer to the total packet loss rate. This formula makes it possible to treat buffer-losses as if they were link-losses. Finally, I studied computer networks from a structural perspective. I demonstrated through fluid simulations that the distribution of resources, specifically the link bandwidth, has a serious impact on the global performance of the network. Then I analyzed the distribution of edge betweenness in a growing scale-free tree under the condition that a local property, the in-degree of the "younger" node of an arbitrary edge, is known in order to find an optimum distribution of link capacity. The derived formula is exact even for finite-sized networks. I also calculated the conditional expectation of edge betweenness, rescaled for infinite networks.

  5. Speckle interferometry. Data acquisition and control for the SPID instrument.

    Science.gov (United States)

    Altarac, S.; Tallon, M.; Thiebaut, E.; Foy, R.

    1998-08-01

    SPID (SPeckle Imaging by Deconvolution) is a new speckle camera currently under construction at CRAL-Observatoire de Lyon. Its high spectral resolution and high image restoration capabilities open new astrophysical programs. The SPID instrument is composed of four main optical modules which are fully automated and computer controlled by software written in Tcl/Tk/Tix and C. This software provides intelligent assistance to the user by choosing observational parameters as a function of atmospheric parameters, computed in real time, and the desired restored image quality. Data acquisition is made by a photon-counting detector (CP40). A VME-based computer under OS9 controls the detector and stores the data. The intelligent system runs under Linux on a PC. A slave PC under DOS commands the motors. These 3 computers communicate through an Ethernet network. SPID can be considered a precursor for the VLT's (Very Large Telescope, four 8-meter telescopes currently being built in Chile by the European Southern Observatory) very high spatial resolution camera.

  6. Predictive Control of Networked Multiagent Systems via Cloud Computing.

    Science.gov (United States)

    Liu, Guo-Ping

    2017-01-18

    This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to compensate for network delays actively. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives the necessary and sufficient conditions of stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinative control of NMASs and its applications.
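    As background for the delay-compensation idea (the paper's exact scheme is not reproduced here), a common ingredient of networked predictive control is rolling a plant model forward over the known delay, so the controller acts on a prediction of the current state rather than the stale measurement. A scalar sketch with assumed model parameters a, b and feedback gain:

    ```python
    # Sketch of active network-delay compensation (a generic networked
    # predictive control ingredient, not this paper's specific controller).
    # A scalar plant x[k+1] = a*x[k] + b*u[k] is rolled forward over the d
    # control inputs already sent during the delay.

    def predict_state(x_delayed, u_history, a, b):
        """Advance the model from the delayed measurement through u_history."""
        x = x_delayed
        for u in u_history:  # inputs applied during the delay, oldest first
            x = a * x + b * u
        return x

    a, b, gain = 0.9, 1.0, 0.5    # assumed plant model and feedback gain
    x_delayed = 2.0               # state measured d = 2 steps ago
    u_history = [-0.8, -0.7]      # controls already sent during those 2 steps

    x_pred = predict_state(x_delayed, u_history, a, b)
    u_now = -gain * x_pred        # feedback computed on the prediction
    print(round(x_pred, 3))       # 0.2
    ```

    Acting on x_pred instead of x_delayed removes the delay from the feedback loop as long as the model is accurate; in the cloud setting the prediction would be computed remotely and the control sequence shipped to the agents.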

  7. Multi-objective optimization in computer networks using metaheuristics

    CERN Document Server

    Donoso, Yezid

    2007-01-01

    Metaheuristics are widely used to solve important practical combinatorial optimization problems. Many new multicast applications emerging from the Internet-such as TV over the Internet, radio over the Internet, and multipoint video streaming-require reduced bandwidth consumption, end-to-end delay, and packet loss ratio. It is necessary to design and to provide for these kinds of applications as well as for those resources necessary for functionality. Multi-Objective Optimization in Computer Networks Using Metaheuristics provides a solution to the multi-objective problem in routing computer networks. It analyzes layer 3 (IP), layer 2 (MPLS), and layer 1 (GMPLS and wireless functions). In particular, it assesses basic optimization concepts, as well as several techniques and algorithms for the search of minimals; examines the basic multi-objective optimization concepts and the way to solve them through traditional techniques and through several metaheuristics; and demonstrates how to analytically model the compu...

  8. Proceedings of the conference on computing in high energy physics `94

    Energy Technology Data Exchange (ETDEWEB)

    Loken, S. C. [ed.

    1994-04-01

    This report contains papers on the following topics: Triggering, data acquisition, online and control systems; hardware architectures; networks and interconnections; data handling and storage systems; software methodologies, languages and tools; user interfaces and visualization; computation; and information systems and multimedia. The individual papers have been indexed and cataloged for the database.

  9. A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises

    DEFF Research Database (Denmark)

    Kuada, Eric; Olesen, Henning

    2011-01-01

    This paper proposes a social network approach to the provisioning and management of cloud computing services termed Opportunistic Cloud Computing Services (OCCS), for enterprises; and presents the research issues that need to be addressed for its implementation. We hypothesise that OCCS will facilitate the adoption process of cloud computing services by enterprises. OCCS deals with the concept of enterprises taking advantage of cloud computing services to meet their business needs without having to pay or paying a minimal fee for the services. The OCCS network will be modelled and implemented as a social network of enterprises collaborating strategically for the provisioning and consumption of cloud computing services without entering into any business agreements. We conclude that it is possible to configure current cloud service technologies and management tools for OCCS but there is a need...

  10. Operations and maintenance manual for the LDUA supervisory control and data acquisition system (LDUA System 4200) and control network (LDUA System 4400)

    International Nuclear Information System (INIS)

    Barnes, G.A.

    1998-01-01

    This document defines the requirements applicable to the operation, maintenance and storage of the Supervisory Control and Data Acquisition System (SCADAS) and Control Network in support of the Light Duty Utility Arm (LDUA) operations

  11. The integration of automated knowledge acquisition with computer-aided software engineering for space shuttle expert systems

    Science.gov (United States)

    Modesitt, Kenneth L.

    1990-01-01

    A prediction was made that the terms expert systems and knowledge acquisition would begin to disappear over the next several years. This is not because they are falling into disuse; it is rather that practitioners are realizing that they are valuable adjuncts to software engineering, in terms of problem domains addressed, user acceptance, and in development methodologies. A specific problem was discussed, that of constructing an automated test analysis system for the Space Shuttle Main Engine. In this domain, knowledge acquisition was part of requirements systems analysis, and was performed with the aid of a powerful inductive ESBT in conjunction with a computer aided software engineering (CASE) tool. The original prediction is not a very risky one -- it has already been accomplished.

  12. Power Consumption Evaluation of Distributed Computing Network Considering Traffic Locality

    Science.gov (United States)

    Ogawa, Yukio; Hasegawa, Go; Murata, Masayuki

    When computing resources are consolidated in a few huge data centers, a massive amount of data is transferred to each data center over a wide area network (WAN). This results in increased power consumption in the WAN. A distributed computing network (DCN), such as a content delivery network, can reduce the traffic from/to the data center, thereby decreasing the power consumed in the WAN. In this paper, we focus on the energy-saving aspect of the DCN and evaluate its effectiveness, especially considering traffic locality, i.e., the amount of traffic related to the geographical vicinity. We first formulate the problem of optimizing the DCN power consumption and describe the DCN in detail. Then, numerical evaluations show that, when there is strong traffic locality and the router has ideal energy proportionality, the system's power consumption is reduced to about 50% of the power consumed in the case where a DCN is not used; moreover, this advantage becomes even larger (up to about 30%) when the data center is located farthest from the center of the network topology.

  13. Correlation between Academic and Skills-Based Tests in Computer Networks

    Science.gov (United States)

    Buchanan, William

    2006-01-01

    Computing-related programmes and modules have many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and an increased requirement from students for increased amounts of hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…

  14. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  15. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    Science.gov (United States)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo procedure, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found over Tc independently of the workload. The globally optimized computational resource allocation and network routing define a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
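    As an illustration of the Metropolis step used to sweep from optimal to suboptimal assignments (a toy cost function, not the paper's model), the following assigns tasks to nodes, proposes single-task relocations, and accepts uphill moves with probability exp(-ΔE/T):

    ```python
    # Minimal Metropolis Monte Carlo for assigning tasks to network nodes.
    # Illustrative toy: the cost is load imbalance across nodes; the paper's
    # latency-based cost and graph routing are not modeled here.
    import math
    import random

    def imbalance(assign, loads, n_nodes):
        """Cost: squared deviation of per-node load from the mean load."""
        per_node = [0.0] * n_nodes
        for task, node in enumerate(assign):
            per_node[node] += loads[task]
        mean = sum(per_node) / n_nodes
        return sum((p - mean) ** 2 for p in per_node)

    def metropolis(loads, n_nodes, T, steps, seed=1):
        rng = random.Random(seed)
        assign = [rng.randrange(n_nodes) for _ in loads]
        cost = imbalance(assign, loads, n_nodes)
        init_cost, best = cost, cost
        for _ in range(steps):
            task = rng.randrange(len(loads))
            old = assign[task]
            assign[task] = rng.randrange(n_nodes)   # propose a relocation
            new_cost = imbalance(assign, loads, n_nodes)
            dE = new_cost - cost
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                cost = new_cost      # accept: always downhill, sometimes uphill
            else:
                assign[task] = old   # reject and roll back
            best = min(best, cost)
        return init_cost, best

    init, best = metropolis([3.0, 1.0, 4.0, 1.0, 5.0, 2.0],
                            n_nodes=3, T=2.0, steps=1000)
    print(best <= init)  # the tracked optimum never exceeds the starting cost
    ```

    Lowering T drives the walk toward the optimal assignment; raising it samples increasingly suboptimal ones, which is the knob the study uses to probe the congested phase.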

  16. Computer Networks E-learning Based on Interactive Simulations and SCORM

    Directory of Open Access Journals (Sweden)

    Francisco Andrés Candelas

    2011-05-01

    This paper introduces a new set of compact interactive simulations developed for the constructive learning of computer networks concepts. These simulations, which compose a virtual laboratory implemented as portable Java applets, have been created by combining EJS (Easy Java Simulations) with the KivaNS API. Furthermore, this work evaluates and measures the skills and motivation level acquired by the students when these simulations are combined with Moodle and SCORM (Sharable Content Object Reference Model) documents. This study has been developed to improve and stimulate autonomous constructive learning, in addition to providing timetable flexibility for a Computer Networks subject.

  17. Proceedings: Distributed digital systems, plant process computers, and networks

    International Nuclear Information System (INIS)

    1995-03-01

    These are the proceedings of a workshop on Distributed Digital Systems, Plant Process Computers, and Networks held in Charlotte, North Carolina on August 16--18, 1994. The purpose of the workshop was to provide a forum for technology transfer, technical information exchange, and education. The workshop was attended by more than 100 representatives of electric utilities, equipment manufacturers, engineering service organizations, and government agencies. The workshop consisted of three days of presentations, exhibitions, a panel discussion and attendee interactions. Original plant process computers at the nuclear power plants are becoming obsolete resulting in increasing difficulties in their effectiveness to support plant operations and maintenance. Some utilities have already replaced their plant process computers by more powerful modern computers while many other utilities intend to replace their aging plant process computers in the future. Information on recent and planned implementations are presented. Choosing an appropriate communications and computing network architecture facilitates integrating new systems and provides functional modularity for both hardware and software. Control room improvements such as CRT-based distributed monitoring and control, as well as digital decision and diagnostic aids, can improve plant operations. Commercially available digital products connected to the plant communications system are now readily available to provide distributed processing where needed. Plant operations, maintenance activities, and engineering analyses can be supported in a cost-effective manner. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database

  18. Computer Network Attack Versus Operational Maneuver from the Sea

    National Research Council Canada - National Science Library

    Herdegen, Dale

    2000-01-01

    ...) vulnerable to computer network attack (CNA). Mission command and control can reduce the impact of the loss of command and control, but it cannot overcome the vast and complex array of threats...

  19. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ruchi D. Chande

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  20. Effects of Computer-Based Practice on the Acquisition and Maintenance of Basic Academic Skills for Children with Moderate to Intensive Educational Needs

    Science.gov (United States)

    Everhart, Julie M.; Alber-Morgan, Sheila R.; Park, Ju Hee

    2011-01-01

    This study investigated the effects of computer-based practice on the acquisition and maintenance of basic academic skills for two children with moderate to intensive disabilities. The special education teacher created individualized computer games that enabled the participants to independently practice academic skills that corresponded with their…

  1. Personal computer local networks report

    CERN Document Server

    1991-01-01

    Please note this is a Short Discount publication. Since the first microcomputer local networks of the late 1970's and early 80's, personal computer LANs have expanded in popularity, especially since the introduction of IBM's first PC in 1981. The late 1980s saw a maturing of the industry, with only a few vendors maintaining a large share of the market. This report is intended to give the reader a thorough understanding of the technology used to build these systems ... from cable to chips ... to ... protocols to servers. The report also fully defines PC LANs and the marketplace, with in-

  2. Decentralized Blended Acquisition

    NARCIS (Netherlands)

    Berkhout, A.J.

    2013-01-01

    The concept of blending and deblending is reviewed, making use of traditional and dispersed source arrays. The network concept of distributed blended acquisition is introduced. A million-trace robot system is proposed, illustrating that decentralization may bring about a revolution in the way we

  3. Object-oriented designs for LHD data acquisitions using client-server model

    International Nuclear Information System (INIS)

    Kojima, M.; Nakanishi, H.; Hidekuma, S.

    1999-01-01

    The LHD data acquisition system handles >600 MB of data per shot. Fully distributed data processing and object-oriented system design are the main principles of this system. Its wide flexibility has been realized by introducing the object-oriented method into the data processing, in which object sharing and class libraries provide a unified way of data handling for network client-server programming. The object class libraries are written in C++, and network object sharing is provided through the commercial software named HARNESS. As for the CAMAC setup, Java scripts can use the C++ class libraries and thus establish the relationship between the object-oriented database and the WWW server. In LHD experiments, the CAMAC system and the Windows NT operating system are applied for digitizing and acquiring data, respectively. For the purpose of LHD data acquisition, new CAMAC handling software on Windows NT has been developed to manipulate the SCSI-connected crate controllers. The CAMAC command lists and diagnostic data classes are shared between client and server computers. A lump of diagnostic data can be treated as part of an object by object-oriented programming. (orig.)

  4. THE IMPROVEMENT OF COMPUTER NETWORK PERFORMANCE WITH BANDWIDTH MANAGEMENT IN KEMURNIAN II SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Bayu Kanigoro

    2012-05-01

    This research describes the improvement of computer network performance with bandwidth management at Kemurnian II Senior High School. The main issue addressed is the absence of bandwidth allocation between computers: when one user downloads data, the available bandwidth is absorbed by that user, leaving other users with no bandwidth. In addition, IP address ranges had been assigned to each room, such as the computer, teacher, and administration rooms, to support the learning process at Kemurnian II Senior High School, so a wireless network was needed. The method comprised on-site observation and interviews with the related parties at the school; the existing network was analyzed and a new topology was designed, including the wireless network along with its configuration and bandwidth separation and limiting on a MikroTik router. The result is that network traffic at Kemurnian II Senior High School can be shared evenly among users; IX and IIX traffic are separated, which improves network access speed at the school; and the wireless network was implemented. Keywords: Bandwidth Management; Wireless Network

  5. Development of a UNIX network compatible reactivity computer

    International Nuclear Information System (INIS)

    Sanchez, R.F.; Edwards, R.M.

    1996-01-01

    A state-of-the-art UNIX network compatible controller and a UNIX host workstation with MATLAB/SIMULINK software were used to develop, implement, and validate a digital reactivity calculation. An objective of the development was to determine why the reactivity output of a Macintosh-based reactivity computer drifted intolerably.

  6. Computing Nash Equilibrium in Wireless Ad Hoc Networks

    DEFF Research Database (Denmark)

    Bulychev, Peter E.; David, Alexandre; Larsen, Kim G.

    2012-01-01

    This paper studies the problem of computing Nash equilibrium in wireless networks modeled by Weighted Timed Automata. Such formalism comes together with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem...

  7. Mississippi Curriculum Framework for Computer Information Systems Technology. Computer Information Systems Technology (Program CIP: 52.1201--Management Information Systems & Business Data). Computer Programming (Program CIP: 52.1201). Network Support (Program CIP: 52.1290--Computer Network Support Technology). Postsecondary Programs.

    Science.gov (United States)

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…

  8. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering, but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computers and systems. Books on software engineering typically portray software as if it exists in a vacuum with no relationship to the wider system. This is wrong because a system is more than software. It is comprised of people, organizations, processes, hardware, and software. All of these components must be considered in an integr

  9. Advances in neural networks computational intelligence for ICT

    CERN Document Server

    Esposito, Anna; Morabito, Francesco; Pasero, Eros

    2016-01-01

    This carefully edited book puts emphasis on computational and artificial intelligent methods for learning and their applications in robotics, embedded systems, and ICT interfaces for psychological and neurological diseases. The book is a follow-up of the scientific workshop on Neural Networks (WIRN 2015) held in Vietri sul Mare, Italy, from the 20th to the 22nd of May 2015. The workshop, at its 27th edition, became a traditional scientific event that brought together scientists from many countries and several scientific disciplines. Each chapter is an extended version of the original contribution presented at the workshop, and together with the reviewers' peer revisions it also benefits from the live discussion during the presentation. The content of the book is organized in the following sections: 1. Introduction, 2. Machine Learning, 3. Artificial Neural Networks: Algorithms and models, 4. Intelligent Cyberphysical and Embedded System, 5. Computational Intelligence Methods for Biomedical ICT in...

  10. An on-line data acquisition system based on Norsk-Data ND-560 computer

    International Nuclear Information System (INIS)

    Bandyopadhyay, A.; Roy, A.; Dey, S.K.; Bhattacharya, S.; Bhowmik, R.K.

    1987-01-01

    This paper describes a high-speed data acquisition system based on CAMAC for the Norsk Data ND-560 computer operating in a multiuser environment. As opposed to the present trend, the system has been implemented with minimum hardware at the CAMAC level, taking advantage of the dual processors of the ND-560. The package consists of several coordinated tasks running in the two CPUs which acquire data, record it on tape, permit on-line analysis and display of the data, and perform related control operations. It has been used in several experiments at VECC and its performance in on-line experiments is reported. (orig.)

  11. Report on a field-portable VME-based distributed data acquisition system

    International Nuclear Information System (INIS)

    Drigert, M.W.; Cole, J.D.; Reber, E.L.; Young, J.M.

    1996-01-01

    A development effort was started two years ago to build a portable data acquisition system that could be used for performing arms control verification and environmental monitoring measurements with complex multi-detector systems in the field. A field-portable data acquisition system has been developed around a VMEbus-based microprocessor and standard TCP/IP network protocols. The hardware consists of a compact VME crate and a single CAMAC crate containing the signal-processing electronics. The component processes of the data acquisition system transfer control and event data over a set of TCP/IP socket connections. The use of network sockets for interprocess communication allows the data acquisition system to be operated transparently on one workstation or on a number of workstations distributed around a local network
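    The socket-based transfer of control and event data between acquisition processes can be sketched in Python. The length-prefixed framing, function names, and loopback demo below are illustrative assumptions, not the actual protocol of the VME system described above:

    ```python
    # Hypothetical sketch: an acquisition process streams length-prefixed
    # "event" records to an analysis process over a TCP socket, the kind of
    # interprocess link the abstract describes. Framing is an assumption.
    import socket
    import struct
    import threading

    def send_events(conn, events):
        """Send each event as a 4-byte big-endian length followed by the payload."""
        for payload in events:
            conn.sendall(struct.pack(">I", len(payload)) + payload)
        conn.close()

    def recv_exact(conn, n):
        """Read exactly n bytes or raise if the peer closed mid-record."""
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-record")
            buf += chunk
        return buf

    def recv_events(conn, count):
        """Receive `count` length-prefixed records."""
        events = []
        for _ in range(count):
            (length,) = struct.unpack(">I", recv_exact(conn, 4))
            events.append(recv_exact(conn, length))
        return events

    def demo():
        """Round-trip a few records over loopback, sender in a thread."""
        server = socket.socket()
        server.bind(("127.0.0.1", 0))      # any free port
        server.listen(1)
        port = server.getsockname()[1]
        sent = [b"event-%d" % i for i in range(5)]

        client = socket.socket()
        client.connect(("127.0.0.1", port))
        t = threading.Thread(target=send_events, args=(client, sent))
        t.start()
        conn, _ = server.accept()
        received = recv_events(conn, len(sent))
        t.join()
        conn.close()
        server.close()
        return sent, received
    ```

    Because the framing is explicit, the same receiver works whether the sender runs on the same workstation or on another host on the local network.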

  12. The data acquisition system of ICT

    International Nuclear Information System (INIS)

    Gao Fuqiang; An Kang; Lu Hua; Cao Peng; Jiang Renqing; Gao Fubing

    2008-01-01

    The purpose of the design is to develop a data acquisition system that can collect and transmit hundreds of channels of weak light-signal data at the same time, so as to meet the needs of industrial computer tomography. The system is composed of two parts: a detection circuit and an acquisition circuit. An FPGA and 20-bit integrating conversion chips are the primary chips adopted in the detection circuit, while the primary chips of the acquisition circuit are an FPGA and the AMCC S5335. The problems of data congestion and data loss were solved by using a multilevel memory. A large number of experiments have proved that this system has very high precision and transmission reliability. The design has been applied in several industrial computer tomography machines produced by the industrial computer tomography research center of Chongqing University, and its effectiveness is well regarded. (authors)

  13. Zaštita računarskih mreža / Protection of computer networks

    Directory of Open Access Journals (Sweden)

    Milojko Jevtović

    2005-09-01

    This paper describes the methods of attack, forms of endangerment, and types of threats to which computer networks are exposed, as well as possible methods and technical solutions for network protection. The effects of threats to computer networks and to the information transmitted over them are analyzed. Certain technical solutions that provide the necessary level of protection for computer networks are described, together with measures for protecting the information transmitted over them. The standards concerning methods and procedures for the cryptographic protection of information in computer networks are listed. The paper also gives an example of protecting one local computer network.

  14. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

    The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by means of two one-to-all shortest-path searches. However, this straightforward algorithm can introduce considerable computational overhead, given that the accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest-path searches, and thereby improves STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
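    The "straightforward algorithm" the abstract contrasts with NTP-A* can be sketched as two one-to-all Dijkstra searches (forward from the origin, backward from the destination) whose travel times are combined against the time budget. The toy graph, the helper names, and the node-based (rather than link-based) formulation below are simplifying assumptions:

    ```python
    # Illustrative sketch of straightforward STP construction: a node is in
    # the prism if shortest time from the origin plus shortest time to the
    # destination fits within the travel-time budget.
    import heapq

    def dijkstra(graph, source):
        """One-to-all shortest travel times; graph[u] = [(v, time), ...]."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                        # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def reverse(graph):
        """Reverse every directed link so we can search backward from the destination."""
        rev = {}
        for u, edges in graph.items():
            for v, w in edges:
                rev.setdefault(v, []).append((u, w))
        return rev

    def space_time_prism(graph, origin, destination, budget):
        """Nodes visitable on a trip origin -> destination within `budget`."""
        from_origin = dijkstra(graph, origin)
        to_destination = dijkstra(reverse(graph), destination)
        return {n for n in from_origin
                if n in to_destination
                and from_origin[n] + to_destination[n] <= budget}
    ```

    Both searches here explore the whole network; NTP-A*'s contribution, per the abstract, is pruning the links that cannot belong to the prism during these searches.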

  15. Data-acquisition system of the reversed field pinch device REPUTE-1

    International Nuclear Information System (INIS)

    Tsuzuki, N.; Aoki, H.; Shinohara, H.; Toyama, H.; Morikawa, J.

    1988-01-01

    The new, compact data-acquisition system of the reversed field pinch device REPUTE-1 is reported. Its distinctive features are high flexibility and easy handling. The interface between the computer and the measurement devices is CAMAC. The computer and the CAMAC devices are connected to a CAMAC byte serial highway that transmits setup parameters and acquisition data. The computer carries out the setup of CAMAC devices and data acquisition automatically by use of the CAMAC parameters and the acquisition database. Maintenance tools for the database are also provided. The computer system, a "TOSBAC DS-600," has been in operation for REPUTE-1 since 1985

  16. Parallel Computation of Unsteady Flows on a Network of Workstations

    Science.gov (United States)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. The utilization of a network of workstations seems an efficient solution to the problem, where large problems can be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstations, 2) efficient communication tools, 3) managing the system efficiently for a given problem. Of course, there is also the question of the efficiency of any given numerical algorithm on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code, both two- and three-dimensional problems were studied, and both steady and unsteady problems were investigated. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and how to communicate at each node efficiently, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.

  17. Electric vehicle data acquisition system

    DEFF Research Database (Denmark)

    Svendsen, Mathias; Winther-Jensen, Mads; Pedersen, Anders Bro

    2014-01-01

    A data acquisition system for electric vehicles is presented. The system connects to the On-board Diagnostic port of newer vehicles, and utilizes the in-vehicle sensor network, as well as auxiliary sensors, to gather data. Data is transmitted continuously to a central database for academic and industrial applications, e.g. research in electric vehicle driving patterns, vehicle substitutability analysis and fleet management. The platform is based on an embedded computer running Linux, and features a high level of modularity and flexibility. The system operates independently of the make of the car, by using the On-board Diagnostic port to identify the car model and adapt its software accordingly. By utilizing the on-board Global Navigation Satellite System, General Packet Radio Service, accelerometer, gyroscope and magnetometer, the system provides valuable data for research in the field of electric vehicles.

  18. Optimization of stochastic discrete systems and control on complex networks computational networks

    CERN Document Server

    Lozovanu, Dmitrii

    2014-01-01

    This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic con...

  19. Development of a tracer transport option for the NAPSAC fracture network computer code

    International Nuclear Information System (INIS)

    Herbert, A.W.

    1990-06-01

    The NAPSAC computer code predicts groundwater flow through fractured rock using a direct fracture-network approach. This paper describes the development of a tracer transport algorithm for the NAPSAC code. A very efficient particle-following approach is used, enabling tracer transport to be predicted through large fracture networks. The new algorithm is tested against three examples. These demonstrations confirm the accuracy of the code for simple networks, where there is an analytical solution to the transport problem, and illustrate the use of the computer code on a more realistic problem. (author)

  20. Computational study of noise in a large signal transduction network

    Directory of Open Access Journals (Sweden)

    Ruohonen Keijo

    2011-06-01

    Background: Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. Results: We implemented a large nonlinear signal transduction network combining the protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domains. In order to perform the simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that the time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased at all frequencies when the system volume was increased. Conclusions: We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems, and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies.
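    The exact Gillespie algorithm the study runs at many cellular volumes can be illustrated on a toy birth-death process. The reactions, rate constants, and volume scaling below are illustrative assumptions, not the paper's signal transduction model:

    ```python
    # Minimal Gillespie stochastic simulation of a birth-death process:
    # production at rate k_prod * volume, degradation at rate k_deg * n.
    # At each step, draw an exponential waiting time from the total
    # propensity, then pick a reaction proportional to its propensity.
    import random

    def gillespie_birth_death(k_prod, k_deg, volume, t_end, seed=0):
        rng = random.Random(seed)
        t, n = 0.0, 0
        times, counts = [0.0], [0]
        while t < t_end:
            a_birth = k_prod * volume      # zeroth-order production scales with volume
            a_death = k_deg * n            # first-order degradation
            a_total = a_birth + a_death
            t += rng.expovariate(a_total)  # exponential waiting time
            if rng.random() * a_total < a_birth:
                n += 1
            else:
                n -= 1
            times.append(t)
            counts.append(n)
        return times, counts
    ```

    For this toy model the stationary mean copy number is k_prod·V/k_deg, so relative fluctuations shrink roughly as the inverse square root of the volume, in line with the abstract's observation that noise power decreases as system volume increases.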

  1. Application of local computer networks in nuclear-physical experiments and technology

    International Nuclear Information System (INIS)

    Foteev, V.A.

    1986-01-01

    The bases of construction, comparative performance and potential of local computer networks with respect to their application in physical experiments are considered. The principle of operation of local networks is shown using the Ethernet network, and the results of an analysis of their operating performance are given. Examples of operating local networks in the areas of nuclear-physics research and nuclear technology are presented: the networks of the Japan Atomic Energy Research Institute, the University of California and Los Alamos National Laboratory; network implementations based on the DECnet and Fastbus programs; local network configurations of the USSR Academy of Sciences and the JINR Neutron Physics Laboratory; and others. It is shown that local networks allow productivity in data processing to be raised significantly

  2. FY 1999 Blue Book: Computing, Information, and Communications: Networked Computing for the 21st Century

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — U.S. research and development (R&D) in computing, communications, and information technologies has enabled unprecedented scientific and engineering advances,...

  3. Cloud Computing Application of Personal Information's Security in Network Sales-channels

    OpenAIRE

    Sun Qiong; Min Liu; Shiming Pang

    2013-01-01

    With the growth of Internet sales, network users have become increasingly demanding about the security of personal information. Existing network sales channels carry personal information security risks and are vulnerable to hacker attacks. Taking full advantage of cloud security management strategies, a cloud computing security management model is introduced for personal information security in network sales channels, in order to solve the problem of information leakage. Then we proposed me...

  4. Service-oriented Software Defined Optical Networks for Cloud Computing

    Science.gov (United States)

    Liu, Yuze; Li, Hui; Ji, Yuefeng

    2017-10-01

    With the development of big data and cloud computing technology, the traditional software-defined network is facing new challenges (e.g., ubiquitous accessibility, higher bandwidth, more flexible management and greater security). This paper proposes a new service-oriented software-defined optical network architecture, comprising a resource layer, a service abstraction layer, a control layer and an application layer. We then dwell on the corresponding service-provisioning method. A different service ID is used to identify each service a device can offer. Finally, we experimentally verify that the proposed service-provisioning method can be applied to transmit different services based on the service ID in the service-oriented software-defined optical network.

  5. Including Internet insurance as part of a hospital computer network security plan.

    Science.gov (United States)

    Riccardi, Ken

    2002-01-01

    Cyber attacks on a hospital's computer network are a new crime to be reckoned with. Should your hospital consider Internet insurance? The author explains this new phenomenon and presents a risk assessment for determining network vulnerabilities.

  6. A constructive logic for services and information flow in computer networks

    NARCIS (Netherlands)

    Borghuis, V.A.J.; Feijs, L.M.G.

    2000-01-01

    In this paper we introduce a typed λ-calculus in which computer networks can be formalized. The calculus is directed at situations where the services available on the network are stationary, while the information can flow freely. For this calculus, an analogue of the 'propositions-as-types' interpretation of

  7. The Business Perspective of Cloud Computing: Actors, Roles, and Value Networks

    OpenAIRE

    Leimeister, Stefanie; Riedl, Christoph; Böhm, Markus; Krcmar, Helmut

    2014-01-01

    With the rise of the ubiquitous provision of computing resources over the past years, cloud computing has been established as a prominent research topic. Many researchers, however, focus exclusively on the technical aspects of cloud computing, thereby neglecting the business opportunities and potential that cloud computing can offer. Enabled by this technology, new market players and business value networks arise and break up the traditional value chain of service provision. The focus of this ...

  8. Services Recommendation System based on Heterogeneous Network Analysis in Cloud Computing

    OpenAIRE

    Junping Dong; Qingyu Xiong; Junhao Wen; Peng Li

    2014-01-01

    Resources are provided mainly in the form of services in cloud computing. In the distributed environment of cloud computing, how to find the needed services efficiently and accurately is the most urgent problem. In cloud computing, services are the intermediary of the cloud platform; they connect large numbers of service providers and requesters, forming a complex heterogeneous network. Traditional recommendation systems only consider the functional and non-functi...

  9. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing models of the LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degree of neuronal excitability is regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that an LSM with STDP+IP performs better than an LSM with a random SNN or with an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better-reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
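    The two plasticity rules can be sketched as simple update functions. The pair-based exponential STDP form, the threshold-adjusting IP variant, and all parameter values below are common textbook choices assumed for illustration, not the paper's exact rules:

    ```python
    # Sketch of pair-based STDP and a simple intrinsic-plasticity update.
    # Parameter values are illustrative.
    import math

    def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Weight change for one pre/post spike pair (times in ms):
        potentiation when pre precedes post, depression otherwise,
        with exponentially decaying magnitude in the spike-time gap."""
        dt = t_post - t_pre
        if dt > 0:      # pre before post: potentiate
            return a_plus * math.exp(-dt / tau)
        elif dt < 0:    # post before pre: depress
            return -a_minus * math.exp(dt / tau)
        return 0.0

    def ip_update(threshold, rate, target_rate=5.0, eta=0.001):
        """Intrinsic plasticity as a homeostatic threshold nudge: raise the
        firing threshold when the neuron fires above its target rate,
        lower it when below, keeping average activity moderate."""
        return threshold + eta * (rate - target_rate)
    ```

    In a self-organizing simulation, `stdp_delta_w` would be accumulated over observed spike pairs to adapt the excitatory connectivity, while `ip_update` is applied per neuron to regulate excitability.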

  10. The status of computing and means of local and external networking at JINR

    Energy Technology Data Exchange (ETDEWEB)

    Dorokhin, A T; Shirikov, V P

    1996-12-31

    The goal of this report is to present a view of the current state of computing support for the different physics research activities at JINR. The JINR network and its applications are considered. Trends in local networks and connectivity with global networks are discussed. 3 refs.

  11. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    Science.gov (United States)

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measures are quite common in social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC and computer-supported collaborative learning research. It argues that measuring…

  12. Lightgrid-an agile distributed computing architecture for Geant4

    International Nuclear Information System (INIS)

    Young, Jason; Perry, John O.; Jevremovic, Tatjana

    2010-01-01

    A lightweight grid-based computing architecture has been developed to accelerate Geant4 computations on a variety of network architectures. This new software is called LightGrid. LightGrid has a variety of features designed to overcome current limitations of other grid-based computing platforms, particularly on smaller network architectures. By focusing on smaller, local grids, LightGrid is able to simplify the grid computing process with minimal changes to existing Geant4 code. LightGrid allows for integration between Geant4 and MySQL, which both increases flexibility in the grid and provides a faster, more reliable, and more portable method for accessing results than traditional data storage systems. This unique method of data acquisition allows for more fault-tolerant runs as well as instant results from simulations as they occur. The performance increases brought by LightGrid allow simulation times to be decreased linearly. LightGrid also allows for pseudo-parallelization with minimal Geant4 code changes.

  13. THE COMPUTATIONAL INTELLIGENCE TECHNIQUES FOR PREDICTIONS - ARTIFICIAL NEURAL NETWORKS

    OpenAIRE

    Mary Violeta Bar

    2014-01-01

    Computational intelligence techniques are used for problems that cannot be solved by traditional techniques, when there are insufficient data to develop a model of the problem, or when the data contain errors. Computational intelligence, as Bezdek (1992) called it, aims at the modeling of biological intelligence. Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is solving problems that are too c...

  14. Extraction of drainage networks from large terrain datasets using high throughput computing

    Science.gov (United States)

    Gong, Jianya; Xie, Jibo

    2009-02-01

    Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). How to process and use these LTD has become a big challenge for GIS users. Extracting drainage networks, which are basic for hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage networks extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of using regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. A HTC environment is employed to test the proposed methods with real datasets.
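    The serial kernel that such drainage extraction parallelizes can be sketched with the classic D8 scheme on a toy DEM. The D8 choice, the unit-distance steepest-drop rule, and the tiny grid are illustrative assumptions, not the paper's algorithm:

    ```python
    # Illustrative D8-style drainage sketch: each cell drains to its steepest
    # strictly-downhill neighbour, and flow accumulation counts how many
    # cells drain through each cell. The tiny DEM stands in for the large
    # terrain datasets; diagonal-distance weighting is omitted for brevity.
    def d8_directions(dem):
        rows, cols = len(dem), len(dem[0])
        downhill = {}
        for r in range(rows):
            for c in range(cols):
                best, drop = None, 0.0
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = r + dr, c + dc
                        if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                            d = dem[r][c] - dem[nr][nc]
                            if d > drop:
                                best, drop = (nr, nc), d
                downhill[(r, c)] = best       # None for pits and outlets
        return downhill

    def flow_accumulation(dem):
        downhill = d8_directions(dem)
        acc = {cell: 1 for cell in downhill}  # each cell contributes itself
        # Process cells from highest to lowest elevation so every upstream
        # total is final before it is passed downstream.
        for cell in sorted(downhill, key=lambda c: -dem[c[0]][c[1]]):
            if downhill[cell] is not None:
                acc[downhill[cell]] += acc[cell]
        return acc
    ```

    High-accumulation cells trace the drainage network. Because flow never crosses a watershed boundary, each watershed's accumulation can be computed as an independent unit, which is what makes the paper's watershed-based decomposition natural for high throughput computing.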

  15. Political rotations and cross-province acquisitions in China

    DEFF Research Database (Denmark)

    Muratova, Yulia; Arnoldi, Jakob; Chen, Xin

    2018-01-01

    The underdeveloped institutional framework and trade barriers between China’s provinces make cross-province acquisitions challenging. We explore how Chinese firms can mitigate this problem. Drawing on social network theory we propose that cross-province rotation of political leaders—a key element of the promotion system of political cadres in China—is a mechanism enabling growth through cross-province acquisitions. We conceptualize rotated leaders as brokers between two geographically dispersed networks. We contribute to the literature on the characteristics of Chinese social networks, the effect of political connections on firm strategy, and the impact of political rotations on firm growth in China’s provinces.

  16. Computer simulation of randomly cross-linked polymer networks

    International Nuclear Information System (INIS)

    Williams, Timothy Philip

    2002-01-01

    In this work, Monte Carlo and Stochastic Dynamics computer simulations of mesoscale model randomly cross-linked networks were undertaken. Task-parallel implementations of the lattice Monte Carlo Bond Fluctuation model and the Kremer-Grest Stochastic Dynamics bead-spring continuum model were designed and used for this purpose. Lattice and continuum precursor melt systems were prepared and then cross-linked to varying degrees. The resultant networks were used to study structural changes during deformation and relaxation dynamics. The effects of a random network topology, featuring a polydisperse distribution of strand lengths and an abundance of pendant chain ends, were qualitatively compared to recently published work. A preliminary investigation into the effects of temperature on the structural and dynamical properties was also undertaken. Structural changes during isotropic swelling and uniaxial deformation revealed a pronounced non-affine deformation dependent on the degree of cross-linking. Fractal heterogeneities were observed in the swollen model networks and were analysed by considering constituent substructures of varying size. The network connectivity determined the length scales at which the majority of the substructure unfolding process occurred. Simulated stress-strain curves and diffraction patterns for uniaxially deformed swollen networks were found to be consistent with experimental findings. Analysis of the relaxation dynamics of various network components revealed a dramatic slowdown due to the network connectivity. The cross-link junction spatial fluctuations for networks close to the sol-gel threshold were observed to be at least comparable with the phantom network prediction. The dangling chain ends were found to display the largest characteristic relaxation time. (author)

  17. Why do Reservoir Computing Networks Predict Chaotic Systems so Well?

    Science.gov (United States)

    Lu, Zhixin; Pathak, Jaideep; Girvan, Michelle; Hunt, Brian; Ott, Edward

    Recently a new type of artificial neural network, called a reservoir computing network (RCN), has been employed to predict the evolution of chaotic dynamical systems from measured data and without a priori knowledge of the governing equations of the system. The quality of these predictions has been found to be spectacularly good. Here, we present a dynamical-systems-based theory for how an RCN works. Basically, an RCN is thought of as consisting of three parts: a randomly chosen input layer, a randomly chosen recurrent network (the reservoir), and an output layer. The advantage of the RCN framework is that training is done only on the linear output layer, making it computationally feasible for the reservoir dimensionality to be large. In this presentation, we address the underlying dynamical mechanisms of RCN function by employing the concepts of generalized synchronization and conditional Lyapunov exponents. Using this framework, we propose conditions on reservoir dynamics necessary for good prediction performance. By looking at the RCN from this dynamical-systems point of view, we gain a deeper understanding of its surprising computational power, as well as insights on how to design an RCN. Supported by Army Research Office Grant Number W911NF1210101.
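    The division of labor the abstract describes (a fixed random reservoir, with training applied only to the linear output layer) can be sketched as follows. The tanh echo-state form, the use of simple LMS gradient descent in place of the usual ridge regression, and all sizes and rates are illustrative assumptions:

    ```python
    # Toy reservoir computing sketch: a small fixed random recurrent network
    # is driven by the input; only the linear readout weights are trained,
    # here on one-step-ahead prediction of a sine wave.
    import math
    import random

    def make_reservoir(n, seed=0, scale=0.5):
        """Fixed random input weights and (weak) recurrent weights."""
        rng = random.Random(seed)
        w_in = [rng.uniform(-scale, scale) for _ in range(n)]
        w_res = [[rng.uniform(-scale, scale) / n for _ in range(n)]
                 for _ in range(n)]
        return w_in, w_res

    def run_reservoir(w_in, w_res, inputs):
        """Drive the reservoir and record its tanh state at every step."""
        n = len(w_in)
        x = [0.0] * n
        states = []
        for u in inputs:
            x = [math.tanh(w_in[i] * u
                           + sum(w_res[i][j] * x[j] for j in range(n)))
                 for i in range(n)]
            states.append(list(x))
        return states

    def train_readout(states, targets, epochs=200, lr=0.1):
        """Train only the linear readout, by per-sample LMS updates."""
        n = len(states[0])
        w_out = [0.0] * n
        for _ in range(epochs):
            for x, y in zip(states, targets):
                err = sum(w * xi for w, xi in zip(w_out, x)) - y
                w_out = [w - lr * err * xi for w, xi in zip(w_out, x)]
        return w_out
    ```

    Because the reservoir is never trained, its recurrent state acts as a fixed nonlinear memory of the input history; fitting the readout reduces to a linear regression, which is what keeps large reservoirs computationally feasible.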

  18. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and biasing subcircuit. Computer simulations demonstrate that differences in geometry of the feedback (afferent) collaterals affects the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  19. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1998-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods and the software composition under the Windows 3.2 were also described. One, two and three dimensional spectra measured by this system were demonstrated

  20. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1997-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large-scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods and the software composition under Windows 3.2 were also described. One-, two- and three-dimensional spectra measured by this system were demonstrated

  1. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to occur largely through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building computational models of neural plasticity that replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models of neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the lines of combining findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Data acquisition with the personal computer to the microwaves generator of the microtron MT-25

    International Nuclear Information System (INIS)

    Rivero Ramirez, D.; Benavides Benitez, J. I.; Quiles Latorre, F. J.; Pahor, J.; Ponikvar, D.; Lago, G.

    2000-01-01

    The following paper includes the description of the design, construction and commissioning of a data acquisition system. The system is intended for sampling the working parameters of the microwave generator of the Microtron MT-25 that will be installed at the Higher Institute of Nuclear Sciences and Technology, Havana, Cuba. In order to guarantee the suitable operation of the system, a monitor program in assembler language has been developed. This program allows the system to communicate with a personal computer through an RS-232 interface and executes the commands received through it. The development of a program for operating the system from a personal computer using virtual instrumentation methods is also included in this paper

  3. Experimental and computational analysis of a large protein network that controls fat storage reveals the design principles of a signaling network.

    Science.gov (United States)

    Al-Anzi, Bader; Arpp, Patrick; Gerges, Sherif; Ormerod, Christopher; Olsman, Noah; Zinn, Kai

    2015-05-01

    An approach combining genetic, proteomic, computational, and physiological analysis was used to define a protein network that regulates fat storage in budding yeast (Saccharomyces cerevisiae). A computational analysis of this network shows that it is not scale-free, and is best approximated by the Watts-Strogatz model, which generates "small-world" networks with high clustering and short path lengths. The network is also modular, containing energy level sensing proteins that connect to four output processes: autophagy, fatty acid synthesis, mRNA processing, and MAP kinase signaling. The importance of each protein to network function is dependent on its Katz centrality score, which is related both to the protein's position within a module and to the module's relationship to the network as a whole. The network is also divisible into subnetworks that span modular boundaries and regulate different aspects of fat metabolism. We used a combination of genetics and pharmacology to simultaneously block output from multiple network nodes. The phenotypic results of this blockage define patterns of communication among distant network nodes, and these patterns are consistent with the Watts-Strogatz model.
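
    As a sketch of how a Katz centrality score ranks nodes by their position in a network like the one described above, the following computes Katz centrality on a small hypothetical adjacency matrix (illustrative only, not the actual yeast protein network from the study):

```python
import numpy as np

# Hypothetical 6-node undirected network; the yeast fat-storage
# network itself is not reproduced here.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0],
], dtype=float)

def katz_centrality(A, alpha=0.1, beta=1.0):
    """Katz centrality: x = beta * (I - alpha * A^T)^{-1} * 1.

    alpha must be below 1/lambda_max(A) for the underlying
    series over walks of all lengths to converge.
    """
    n = A.shape[0]
    x = np.linalg.solve(np.eye(n) - alpha * A.T, beta * np.ones(n))
    return x / np.linalg.norm(x)

scores = katz_centrality(A)
# Well-connected, centrally placed nodes receive higher scores.
ranking = np.argsort(scores)[::-1]
```

    Nodes deep inside a densely connected module score higher than peripheral nodes, which is the sense in which a protein's importance tracks its Katz centrality.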

  4. High speed switching for computer and communication networks

    NARCIS (Netherlands)

    Dorren, H.J.S.

    2014-01-01

    The role of data centers and computers is vital for the future of our data-centric society. Historically, the performance of data centers has increased by a factor of 100-1000 every ten years, and as a result the capacity of the data-center communication network has to scale accordingly. This

  5. A new approach in development of data flow control and investigation system for computer networks

    International Nuclear Information System (INIS)

    Frolov, I.; Vaguine, A.; Silin, A.

    1992-01-01

    This paper describes a new approach in development of data flow control and investigation system for computer networks. This approach was developed and applied in the Moscow Radiotechnical Institute for control and investigations of Institute computer network. It allowed us to solve our network current problems successfully. Description of our approach is represented below along with the most interesting results of our work. (author)

  6. Critical services in the LHC computing

    International Nuclear Information System (INIS)

    Sciaba, A

    2010-01-01

    The LHC experiments (ALICE, ATLAS, CMS and LHCb) rely on complex computing systems for data acquisition, processing, distribution, analysis and simulation. These systems run using a variety of services provided by the experiments, the Worldwide LHC Computing Grid and the different computing centres. These services range from the most basic (network, batch systems, file systems) to the mass storage services or the Grid information system, up to the different workload management systems, data catalogues and data transfer tools, often internally developed in the collaborations. In this contribution we review the status of the services most critical to the experiments by quantitatively measuring their readiness with respect to the start of the LHC operations. Shortcomings are identified and common recommendations are offered.

  7. Modeling for the management of peak loads on a radiology image management network

    International Nuclear Information System (INIS)

    Dwyer, S.J.; Cox, G.G.; Templeton, A.W.; Cook, L.T.; Anderson, W.H.; Hensley, K.S.

    1987-01-01

    The design of a radiology image management network for a radiology department can now be assisted by a queueing model. The queueing model requires that the designers specify the following parameters: the number of tasks to be accomplished (acquisition of image data, transmission of data, archiving of data, display and manipulation of data, and generation of hard copies); the average times to complete each task; the patient scheduled arrival times; and the number and type of computer nodes interfaced to the network (acquisition nodes, interactive diagnostic display stations, archiving nodes, hard copy nodes, and gateways to hospital systems). The outcomes from the queueing model include mean and peak throughput data rates, together with the bottlenecks identified in each case. This exhibit presents the queueing model and illustrates its use in managing peak loads on an image management network
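
    The kind of peak-load bottleneck behaviour a queueing model exposes can be illustrated with a minimal single-node simulation; the arrival and service rates below are illustrative and not taken from the exhibit:

```python
import random

def simulate_mm1(arrival_rate, service_rate, n_patients, seed=1):
    """Discrete-event sketch of one acquisition node as an M/M/1 queue.

    Returns the mean waiting time before service; as arrival_rate
    approaches service_rate, the node becomes a bottleneck and
    waiting times grow sharply.
    """
    rng = random.Random(seed)
    t_arrival = 0.0      # time of the current arrival
    server_free_at = 0.0 # time the node finishes its current task
    total_wait = 0.0
    for _ in range(n_patients):
        t_arrival += rng.expovariate(arrival_rate)
        start = max(t_arrival, server_free_at)
        total_wait += start - t_arrival
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_patients

light = simulate_mm1(arrival_rate=0.5, service_rate=1.0, n_patients=20000)
peak = simulate_mm1(arrival_rate=0.9, service_rate=1.0, n_patients=20000)
# peak-load waits are several times longer than light-load waits
```

    Running each node of the network through such a model at scheduled peak arrival rates is how the slowest stage (acquisition, archiving, display or hard copy) is identified as the bottleneck.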

  8. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    OpenAIRE

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on the source tasks for generic purposes to the object tracking tasks using only a limited amount of tra...

  9. Data acquisition for sensor systems

    CERN Document Server

    Taylor, H Rosemary

    1997-01-01

    'Data acquisition' is concerned with taking one or more analogue signals and converting them to digital form with sufficient accuracy and speed to be ready for processing by a computer. The increasing use of computers makes this an expanding field, and it is important that the conversion process is done correctly because information lost at this stage can never be regained, no matter how good the computation. The old saying - garbage in, garbage out - is very relevant to data acquisition, and so every part of the book contains a discussion of errors: where do they come from, how large are they, and what can be done to reduce them? The book aims to treat the data acquisition process in depth with less detailed chapters on the fundamental principles of measurement, sensors and signal conditioning. There is also a chapter on software packages, which are becoming increasingly popular. This is such a rapidly changing topic that any review of available programs is bound to be out of date before the book re...
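
    As a small example of the conversion errors discussed above, the ideal quantization limits of an N-bit converter can be computed directly (standard textbook formulas, not code from the book):

```python
def quantization_snr_db(bits):
    """Ideal SNR of an N-bit converter for a full-scale sine input:
    SNR = 6.02*N + 1.76 dB (quantization noise only)."""
    return 6.02 * bits + 1.76

def lsb_size(full_scale_volts, bits):
    """Voltage step of one least-significant bit; the quantization
    error of an ideal converter is bounded by +/- LSB/2."""
    return full_scale_volts / (2 ** bits)

# e.g. a hypothetical 12-bit channel with a 10 V full-scale range:
snr = quantization_snr_db(12)   # ideal SNR of about 74 dB
lsb = lsb_size(10.0, 12)        # one LSB is about 2.44 mV
```

    These figures are the best case: noise, non-linearity and timing jitter in the signal-conditioning chain only degrade them, which is why errors lost at the conversion stage can never be regained later.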

  10. Data-Acquisition Systems for Fusion Devices

    NARCIS (Netherlands)

    van Haren, P. C.; Oomens, N. A.

    1993-01-01

    During the last two decades, computerized data acquisition systems (DASs) have been applied at magnetic confinement fusion devices. Present-day data acquisition is done by means of distributed computer systems and transient recorders in CAMAC systems. The development of DASs has been technology

  11. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)

    2007-07-20

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.
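
    A minimal sketch of the original (unmodified) Watts-Strogatz construction, together with the two small-world statistics the model is compared on, clustering coefficient and characteristic path length, is given below; the parameters are illustrative:

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring of n nodes, each linked to its k nearest neighbours
    (k even); each ring edge is rewired with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + j) % n
                if old not in adj[i]:
                    continue  # already rewired away
                choices = [t for t in range(n) if t != i and t not in adj[i]]
                if not choices:
                    continue
                new = rng.choice(choices)
                adj[i].discard(old)
                adj[old].discard(i)
                adj[i].add(new)
                adj[new].add(i)
    return adj

def avg_clustering(adj):
    """Mean local clustering coefficient over all nodes."""
    total = 0.0
    for i, nbrs in adj.items():
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * links / (d * (d - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length via breadth-first search from each node."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

ring = watts_strogatz(200, 6, 0.0)   # regular ring lattice
small = watts_strogatz(200, 6, 0.1)  # small-world regime
# a little rewiring shortens paths while clustering stays high
```

    It is exactly these two quantities, high clustering inherited from the lattice and short paths created by the rewired shortcuts, that the degree-distribution extension above aims to match more closely to real networks.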

  12. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property

  13. Finding Multi-step Attacks in Computer Networks using Heuristic Search and Mobile Ambients

    NARCIS (Netherlands)

    Nunes Leal Franqueira, V.

    2009-01-01

    An important aspect of IT security governance is the proactive and continuous identification of possible attacks in computer networks. This is complicated due to the complexity and size of networks, and due to the fact that usually network attacks are performed in several steps. This thesis proposes

  14. Data Acquisition Backbone Core DABC

    International Nuclear Information System (INIS)

    Adamczewski, J; Essel, H G; Kurz, N; Linev, S

    2008-01-01

    For the new experiments at FAIR new concepts of data acquisition systems have to be developed like the distribution of self-triggered, time stamped data streams over high performance networks for event building. The Data Acquisition Backbone Core (DABC) is a software package currently under development for FAIR detector tests, readout components test, and data flow investigations. All kinds of data channels (front-end systems) are connected by program plug-ins into functional components of DABC like data input, combiner, scheduler, event builder, analysis and storage components. After detailed simulations real tests of event building over a switched network (InfiniBand clusters with up to 110 nodes) have been performed. With the DABC software more than 900 MByte/s input and output per node can be achieved meeting the most demanding requirements. The software is ready for the implementation of various test beds needed for the final design of data acquisition systems at FAIR. The development of key components is supported by the FutureDAQ project of the European Union (FP6 I3HP JRA1)

  15. Whole-brain functional connectivity during acquisition of novel grammar: Distinct functional networks depend on language learning abilities.

    Science.gov (United States)

    Kepinska, Olga; de Rover, Mischa; Caspers, Johanneke; Schiller, Niels O

    2017-03-01

    In an effort to advance the understanding of brain function and organisation accompanying second language learning, we investigate the neural substrates of novel grammar learning in a group of healthy adults, consisting of participants with high and average language analytical abilities (LAA). By means of an Independent Components Analysis, a data-driven approach to functional connectivity of the brain, the fMRI data collected during a grammar-learning task were decomposed into maps representing separate cognitive processes. These included the default mode, task-positive, working memory, visual, cerebellar and emotional networks. We further tested for differences within the components, representing individual differences between the High and Average LAA learners. We found high analytical abilities to be coupled with stronger contributions to the task-positive network from areas adjacent to bilateral Broca's region, stronger connectivity within the working memory network and within the emotional network. Average LAA participants displayed stronger engagement within the task-positive network from areas adjacent to the right-hemisphere homologue of Broca's region and typical to lower level processing (visual word recognition), and increased connectivity within the default mode network. The significance of each of the identified networks for the grammar learning process is presented next to a discussion on the established markers of inter-individual learners' differences. We conclude that in terms of functional connectivity, the engagement of brain's networks during grammar acquisition is coupled with one's language learning abilities. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Model-based Sensor Data Acquisition and Management

    OpenAIRE

    Aggarwal, Charu C.; Sathe, Saket; Papaioannou, Thanasis G.; Jeung, Ho Young; Aberer, Karl

    2012-01-01

    In recent years, due to the proliferation of sensor networks, there has been a genuine need for research into techniques for sensor data acquisition and management. To this end, a large number of techniques have emerged that advocate model-based sensor data acquisition and management. These techniques use mathematical models for performing various day-to-day tasks involved in managing sensor data. In this chapter, we survey the state-of-the-art techniques for model-based sensor data acquisition...

  17. An embedded control and acquisition system for multichannel detectors

    International Nuclear Information System (INIS)

    Gori, L.; Tommasini, R.; Cautero, G.; Giuressi, D.; Barnaba, M.; Accardo, A.; Carrato, S.; Paolucci, G.

    1999-01-01

    We present a pulse counting multichannel data acquisition system, characterized by its high number of high-speed acquisition channels and by its modular, embedded system architecture. The former leads to very fast acquisitions and makes it possible to obtain sequences of snapshots for the study of time-dependent phenomena. The latter, thanks to the integration of a CPU into the system, provides high computational capabilities, so that interfacing with the user computer is very simple and user-friendly. Moreover, the user computer is free from control and acquisition tasks. The system has been developed for one of the beamlines of the third-generation synchrotron radiation source ELETTRA and, because of its modular architecture, can be useful in various other kinds of experiments where parallel acquisition, high data rates, and user-friendliness are required. First experimental results on a double-pass hemispherical electron analyser provided with a 96-channel detector confirm the validity of the approach. (author)

  18. 2003 Conference for Computing in High Energy and Nuclear Physics

    International Nuclear Information System (INIS)

    Schalk, T.

    2003-01-01

    The conference was subdivided into the following separate tracks. Electronic presentations and/or videos are provided on the main website link. Sessions: Plenary Talks and Panel Discussion; Grid Architecture, Infrastructure, and Grid Security; HENP Grid Applications, Testbeds, and Demonstrations; HENP Computing Systems and Infrastructure; Monitoring; High Performance Networking; Data Acquisition, Triggers and Controls; First Level Triggers and Trigger Hardware; Lattice Gauge Computing; HENP Software Architecture and Software Engineering; Data Management and Persistency; Data Analysis Environment and Visualization; Simulation and Modeling; and Collaboration Tools and Information Systems

  19. A Privacy-Preserving Framework for Collaborative Intrusion Detection Networks Through Fog Computing

    DEFF Research Database (Denmark)

    Wang, Yu; Xie, Lin; Li, Wenjuan

    2017-01-01

    Nowadays, cyber threats (e.g., intrusions) are distributed across various networks with dispersed networking resources. Intrusion detection systems (IDSs) have already become an essential solution to defend against a large number of attacks. With the development of cloud computing, a modern IDS...

  20. The Model of the Software Running on a Computer Equipment Hardware Included in the Grid network

    Directory of Open Access Journals (Sweden)

    T. A. Mityushkina

    2012-12-01

    A new approach to building a cloud computing environment using Grid networks is proposed in this paper. The authors describe the functional capabilities, algorithm and model of the software running on computer equipment hardware included in the Grid network that will allow implementing a cloud computing environment using Grid technologies.

  1. Effective Response to Attacks On Department of Defense Computer Networks

    National Research Council Canada - National Science Library

    Shaha, Patrick

    2001-01-01

    .... For the Commanders-in-Chief (CINCs), computer networking has proven especially useful in maintaining contact and sharing data with elements forward deployed as well as with host nation governments and agencies...

  2. Computer network security and cyber ethics

    CERN Document Server

    Kizza, Joseph Migga

    2014-01-01

    In its 4th edition, this book remains focused on increasing public awareness of the nature and motives of cyber vandalism and cybercriminals, the weaknesses inherent in cyberspace infrastructure, and the means available to protect ourselves and our society. This new edition aims to integrate security education and awareness with discussions of morality and ethics. The reader will gain an understanding of how the security of information in general and of computer networks in particular, on which our national critical infrastructure and, indeed, our lives depend, is based squarely on the individ

  3. Computational Aspects of Sensor Network Protocols (Distributed Sensor Network Simulator

    Directory of Open Access Journals (Sweden)

    Vasanth Iyer

    2009-08-01

    In this work, we model sensor networks as an unsupervised learning and clustering process. We classify nodes according to their static distribution to form known class densities (CCPD). These densities are chosen from specific cross-layer features which maximize the lifetime of power-aware routing algorithms. To circumvent the computational complexity of a power-aware communication stack, we introduce path-loss models at the nodes only for high-density deployments. We study the cluster heads and formulate the data handling capacity for an expected deployment, and use localized probability models to fuse the data with its side information before transmission. Thus each cluster head has a unique Pmax, but not all cluster heads have the same measured value. In a lossless mode, if there are no faults in the sensor network, then we can show that the highest probability given by Pmax is ambiguous if its frequency is ≤ n/2; otherwise it can be determined by a local function. We further show that event detection at the cluster heads can be modelled with a pattern 2m where m, the number of bits, can be a correlated pattern of 2 bits, and for a tight lower bound we use 3-bit Huffman codes which have entropy < 1. These local algorithms are further studied to optimize power consumption and fault detection, and to maximize the performance of the distributed routing algorithm used at the higher layers. From these bounds it is observed that, in a large network, the power dissipation is invariant to network size. The performance of the routing algorithms depends solely on the success of finding healthy nodes in a large distribution. It is also observed that if the network size is kept constant and the nodes are deployed more densely, the local path-loss model affects the performance of the routing algorithms. We also obtain the maximum intensity of transmitting nodes for a given category of routing algorithms under an outage constraint, i.e., the lifetime of the sensor network.

  4. Data acquisition in nuclear and particle physics

    International Nuclear Information System (INIS)

    Renk, B.

    1993-01-01

    An introduction to the methods of measurement data acquisition in nuclear and particle physics for students of physics as well as experimental physicists and engineers in research and industry. The contents are: obtaining of measurement data, digitizing and triggers, memories and microprocessors, bus systems, communication and networks, and examples of data acquisition systems

  5. Computer application for database management and networking of service radio physics

    International Nuclear Information System (INIS)

    Ferrando Sanchez, A.; Cabello Murillo, E.; Diaz Fuentes, R.; Castro Novais, J.; Clemente Gutierrez, F.; Casa de Juan, M. A. de la; Adaimi Hernandez, P.

    2011-01-01

    Databases in quality control prove to be a powerful tool for recording, management and statistical process control. Developed in a Windows environment under Access (Microsoft Office), our service implements this philosophy on the centre's computer network. A computer that acts as the server provides the database to the treatment units for recording daily quality control measurements and incidents. To avoid common problems such as shortcuts that stop working after data migration, the possible use of duplicates, and erroneous data loss caused by errors in network connections, we proceeded to manage the connections and database access, easing maintenance and use for all service personnel.

  6. Data acquisition for experiments with multi-detector arrays

    Indian Academy of Sciences (India)

    Experiments with multi-detector arrays have special requirements and place higher demands on computer data acquisition systems. In this contribution we discuss data acquisition systems with special emphasis on multi-detector arrays and in particular we describe a new data acquisition system, AMPS which we have ...

  7. Three neural network based sensor systems for environmental monitoring

    International Nuclear Information System (INIS)

    Keller, P.E.; Kouzes, R.T.; Kangas, L.J.

    1994-05-01

    Compact, portable systems capable of quickly identifying contaminants in the field are of great importance when monitoring the environment. One of the missions of the Pacific Northwest Laboratory is to examine and develop new technologies for environmental restoration and waste management at the Hanford Site. In this paper, three prototype sensing systems are discussed. These prototypes are composed of sensing elements, a data acquisition system, a computer, and a neural network implemented in software, and are capable of automatically identifying contaminants. The first system employs an array of tin-oxide gas sensors and is used to identify chemical vapors. The second system employs an array of optical sensors and is used to identify the composition of chemical dyes in liquids. The third system contains a portable gamma-ray spectrometer and is used to identify radioactive isotopes. In these systems, the neural network is used to identify the composition of the sensed contaminant. With a neural network, the intense computation takes place during the training process. Once the network is trained, operation consists of propagating the data through the network. Since the computation involved during operation consists of vector-matrix multiplication and the application of look-up tables, unknown samples can be rapidly identified in the field
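
    The inference-only operation described above, vector-matrix multiplication plus look-up tables, can be sketched as follows; the layer sizes, weights and sigmoid table below are illustrative and not taken from the Hanford prototypes:

```python
import numpy as np

# Precompute a sigmoid look-up table so that no transcendental
# functions are needed at identification time.
LUT_BITS = 10
_x = np.linspace(-8.0, 8.0, 2 ** LUT_BITS)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-_x))

def sigmoid_lookup(z):
    """Approximate sigmoid via table indexing (clipped to [-8, 8])."""
    idx = np.clip(((z + 8.0) / 16.0 * (2 ** LUT_BITS - 1)).astype(int),
                  0, 2 ** LUT_BITS - 1)
    return SIGMOID_LUT[idx]

def classify(sensor_vector, weights, biases):
    """Propagate one sensor reading through the trained network:
    each layer is a vector-matrix multiply followed by the LUT."""
    a = np.asarray(sensor_vector, dtype=float)
    for W, b in zip(weights, biases):
        a = sigmoid_lookup(a @ W + b)
    return int(np.argmax(a))  # index of the identified class

# Hypothetical trained 8-6-3 network (random weights for illustration).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 6)), rng.normal(size=(6, 3))]
biases = [np.zeros(6), np.zeros(3)]
label = classify(rng.normal(size=8), weights, biases)
```

    Because the operational phase reduces to a few matrix multiplications and table look-ups, identification can run quickly on modest field hardware, which is the point the abstract makes.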

  8. Computational cost for detecting inspiralling binaries using a network of laser interferometric detectors

    International Nuclear Information System (INIS)

    Pai, Archana; Bose, Sukanta; Dhurandhar, Sanjeev

    2002-01-01

    We extend a coherent network data-analysis strategy developed earlier for detecting Newtonian waveforms to the case of post-Newtonian (PN) waveforms. Since the PN waveform depends on the individual masses of the inspiralling binary, the parameter-space dimension increases by one from that of the Newtonian case. We obtain the number of templates and estimate the computational costs for PN waveforms: for a lower mass limit of 1 M☉, for LIGO-I noise and with 3% maximum mismatch, the online computational speed requirement for a single detector is a few Gflops; for a two-detector network it is hundreds of Gflops and for a three-detector network it is tens of Tflops. Apart from idealistic networks, we obtain results for realistic networks comprising LIGO and VIRGO. Finally, we compare costs incurred in a coincidence detection strategy with those incurred in the coherent strategy detailed above

  9. Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.

    Science.gov (United States)

    Huson, Daniel H; Linz, Simone

    2018-01-01

    A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.

  10. Utilizing HPC Network Technologies in High Energy Physics Experiments

    CERN Document Server

    AUTHOR|(CDS)2088631; The ATLAS collaboration

    2017-01-01

    Because of their performance characteristics, high-performance fabrics like InfiniBand or OmniPath are interesting technologies for many local area network applications, including data acquisition systems for high-energy physics experiments like the ATLAS experiment at CERN. This paper analyzes existing APIs for high-performance fabrics and evaluates their suitability for data acquisition systems in terms of performance and domain applicability. The study finds that existing software APIs for high-performance interconnects are focused on applications in high-performance computing with specific workloads and are not compatible with the requirements of data acquisition systems. To evaluate the use of high-performance interconnects in data acquisition systems, a custom library, NetIO, is presented and compared against existing technologies. NetIO has a message-queue-like interface which matches the ATLAS use case better than traditional HPC APIs like MPI. The architecture of NetIO is based on an interchangeable bac...

  11. NASF transposition network: A computing network for unscrambling p-ordered vectors

    Science.gov (United States)

    Lim, R. S.

    1979-01-01

    The viewpoints of design, programming, and application of the transposition network (TN) are presented. The TN is a programmable combinational logic network that connects 521 memory modules to 512 processors. The unscrambling of p-ordered vectors into 1-ordered vectors in one cycle is described. The TN design is based upon the concept of cyclic groups from abstract algebra and primitive roots and indices from number theory. The programming of the TN is very simple, requiring only 20 bits: 10 bits for offset control and 10 bits for barrel-switch shift control. This simple control is executed by the control unit (CU), not the processors. Any memory access by a processor must be coordinated with the CU and wait for all other processors to come to a synchronization point. These wait and synchronization events can degrade the performance of a computation. The TN is applied to multidimensional data manipulation, matrix processing, and data sorting, and can also perform a perfect shuffle. Unlike other more complicated and powerful permutation networks, the TN cannot, if possible at all, unscramble non-p-ordered vectors in one cycle.
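
    The modular arithmetic underlying the unscrambling of p-ordered vectors can be sketched as follows; this models only the index mapping that a prime number of memory modules (M = 521) makes conflict-free, not the TN's offset and barrel-switch hardware:

```python
# With M prime, the map i -> (i * p) % M is a bijection for any
# stride p not divisible by M, so a p-ordered vector occupies M
# distinct modules and can be gathered back in order; the inverse
# map uses the multiplicative inverse of p modulo M.

M = 521  # number of memory modules (prime)

def scatter(vector, p):
    """Store element i of the vector in module (i * p) % M (p-ordered)."""
    modules = [None] * M
    for i, value in enumerate(vector):
        modules[(i * p) % M] = value
    return modules

def gather(modules, p, n):
    """Recover the 1-ordered vector by reading module (i * p) % M."""
    return [modules[(i * p) % M] for i in range(n)]

data = list(range(100))
stored = scatter(data, p=7)
assert gather(stored, p=7, n=100) == data

# Equivalently, the element held in module m has vector index
# (m * p^{-1}) % M, computed from the modular inverse of p:
p_inv = pow(7, -1, M)
assert (7 * p_inv) % M == 1
```

    Choosing M = 521 prime (one more than the 512 processors) is what guarantees that every stride p < M is invertible, so any p-ordered access pattern is conflict-free in a single cycle.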

  12. Natural language acquisition in large scale neural semantic networks

    Science.gov (United States)

    Ealey, Douglas

    This thesis puts forward the view that a purely signal-based approach to natural language processing is both plausible and desirable. By questioning the veracity of symbolic representations of meaning, it argues for a unified, non-symbolic model of knowledge representation that is both biologically plausible and, potentially, highly efficient. Processes to generate a grounded, neural form of this model, dubbed the semantic filter, are discussed. The combined effects of local neural organisation, coincident with perceptual maturation, are used to hypothesise its nature. This theoretical model is then validated in light of a number of fundamental neurological constraints and milestones. The mechanisms of semantic and episodic development that the model predicts are then used to explain linguistic properties, such as propositions and verbs, syntax and scripting. To mimic the growth of locally densely connected structures upon an unbounded neural substrate, a system is developed that can grow arbitrarily large, data-dependent structures composed of individual self-organising neural networks. The maturational nature of the data used results in a structure in which the perception of concepts is refined by the networks, but demarcated by subsequent structure. As a consequence, the overall structure shows significant memory and computational benefits, as predicted by the cognitive and neural models. Furthermore, the localised nature of the neural architecture also avoids the increasing error sensitivity and redundancy of traditional systems as the training domain grows. The semantic and episodic filters have been demonstrated to perform as well as, or better than, more specialist networks, whilst using significantly larger vocabularies, more complex sentence forms and more natural corpora.

  13. Near Theoretical Gigabit Link Efficiency for Distributed Data Acquisition Systems.

    Science.gov (United States)

    Abu-Nimeh, Faisal T; Choong, Woon-Seng

    2017-03-01

    Link efficiency, data integrity, and continuity are crucial for high-throughput and real-time systems. Most such applications require specialized hardware and operating systems, as well as extensive tuning, to achieve high efficiency. Here, we present an implementation of gigabit Ethernet data streaming which achieves 99.26% link efficiency with no packet loss. The design and implementation are built on OpenPET, an open-source data acquisition platform for nuclear medical imaging, where (a) a crate hosting multiple OpenPET detector boards uses a User Datagram Protocol over Internet Protocol (UDP/IP) Ethernet soft-core, capable of understanding PAUSE frames, to stream data out to a computer workstation; (b) the receiving computer uses Netmap to allow the processing software (i.e., user space), which is written in Python, to directly receive and manage the network card's ring buffers, bypassing the operating system kernel's networking stack; and (c) a multi-threaded application using synchronized queues is implemented in the processing software (Python) to free up the ring buffers as quickly as possible while preserving data integrity and flow continuity.
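    The synchronized-queue pattern of step (c) — one thread draining the ring buffers as fast as possible while workers process at their own pace — can be sketched with the standard library. This is a generic illustration, not the OpenPET code; `packets` below stands in for the frames Netmap would expose:

```python
import queue
import threading

buf_q = queue.Queue(maxsize=1024)   # hand-off between receiver and worker

def receiver(packets):
    """Drain the NIC ring as fast as possible: copy each frame out so the
    ring slot can be reused immediately, then hand it to the queue."""
    for pkt in packets:
        buf_q.put(bytes(pkt))
    buf_q.put(None)                 # sentinel: end of stream

def worker(out):
    """Consume frames in order; `out.append` stands in for event building."""
    while True:
        pkt = buf_q.get()
        if pkt is None:
            break
        out.append(pkt)
        buf_q.task_done()

frames = [b"evt%d" % i for i in range(5)]
received = []
t = threading.Thread(target=worker, args=(received,))
t.start()
receiver(frames)
t.join()
assert received == frames           # order and integrity preserved
```

    The bounded queue provides back-pressure: if the workers fall behind, `put` blocks the receiver rather than silently dropping frames.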

  14. Experimental and computational analysis of a large protein network that controls fat storage reveals the design principles of a signaling network.

    Directory of Open Access Journals (Sweden)

    Bader Al-Anzi

    2015-05-01

    Full Text Available An approach combining genetic, proteomic, computational, and physiological analysis was used to define a protein network that regulates fat storage in budding yeast (Saccharomyces cerevisiae). A computational analysis of this network shows that it is not scale-free, and is best approximated by the Watts-Strogatz model, which generates "small-world" networks with high clustering and short path lengths. The network is also modular, containing energy level sensing proteins that connect to four output processes: autophagy, fatty acid synthesis, mRNA processing, and MAP kinase signaling. The importance of each protein to network function depends on its Katz centrality score, which is related both to the protein's position within a module and to the module's relationship to the network as a whole. The network is also divisible into subnetworks that span modular boundaries and regulate different aspects of fat metabolism. We used a combination of genetics and pharmacology to simultaneously block output from multiple network nodes. The phenotypic results of this blockage define patterns of communication among distant network nodes, and these patterns are consistent with the Watts-Strogatz model.

  15. Analog-to-digital clinical data collection on networked workstations with graphic user interface.

    Science.gov (United States)

    Lunt, D

    1991-02-01

    An innovative respiratory examination system has been developed that combines physiological response measurement, real-time graphic displays, user-driven operating sequences, and networked file archiving and review into a scientific research and clinical diagnosis tool. This newly constructed computer network is being used to enhance the research center's ability to perform patient pulmonary function examinations. Respiratory data are simultaneously acquired and graphically presented during patient breathing maneuvers and rapidly transformed into graphic and numeric reports, suitable for statistical analysis or database access. The environment consists of the hardware (Macintosh computer, MacADIOS converters, analog amplifiers), the software (HyperCard v2.0, HyperTalk, XCMDs), and the network (AppleTalk, fileservers, printers) as building blocks for data acquisition, analysis, editing, and storage. System operation modules include: Calibration, Examination, Reports, On-line Help Library, Graphic/Data Editing, and Network Storage.

  16. Whitelists Based Multiple Filtering Techniques in SCADA Sensor Networks

    Directory of Open Access Journals (Sweden)

    DongHo Kang

    2014-01-01

    Full Text Available The Internet of Things (IoT) consists of many tiny devices connected together to form a collaborative computing environment. Recently, IoT technologies have begun to merge with supervisory control and data acquisition (SCADA) sensor networks to more efficiently gather and analyze real-time data from sensors in industrial environments. But SCADA sensor networks are becoming increasingly vulnerable to cyber-attacks due to increased connectivity. To safely adopt IoT technologies in SCADA environments, it is important to improve the security of SCADA sensor networks. In this paper we propose a multiple filtering technique based on whitelists to detect illegitimate packets. Our proposed system detects network- and application-protocol attack traffic with a set of whitelists collected from normal traffic.
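    The core idea — permit only flows seen during a trusted learning phase, drop everything else — can be shown in a few lines. The flow fields and method names below are illustrative assumptions, not the paper's actual design:

```python
from typing import NamedTuple

class Flow(NamedTuple):
    src: str      # source address
    dst: str      # destination address
    proto: str    # application protocol, e.g. an industrial protocol
    port: int     # destination port

class WhitelistFilter:
    """Default-deny filter: a packet passes only if its flow tuple was
    recorded while learning from known-good traffic."""
    def __init__(self):
        self.allowed = set()

    def learn(self, flow: Flow):
        """Record a flow observed during a trusted training window."""
        self.allowed.add(flow)

    def permit(self, flow: Flow) -> bool:
        return flow in self.allowed

f = WhitelistFilter()
f.learn(Flow("10.0.0.5", "10.0.0.9", "modbus", 502))
assert f.permit(Flow("10.0.0.5", "10.0.0.9", "modbus", 502))
assert not f.permit(Flow("10.0.0.66", "10.0.0.9", "modbus", 502))
```

    Because SCADA traffic is highly regular, default-deny whitelisting is far more practical here than on general-purpose networks, where the set of legitimate flows is open-ended.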

  17. Application node system image manager subsystem within a distributed function laboratory computer system

    International Nuclear Information System (INIS)

    Stubblefield, F.W.; Beck, R.D.

    1978-10-01

    A computer system to control and acquire data from one x-ray diffraction, five neutron scattering, and four neutron diffraction experiments located at the Brookhaven National Laboratory High Flux Beam Reactor has operated in a routine manner for over three years. The computer system is configured as a network of computer processors with the processor interconnections assuming a star-like structure. At the points of the star are the ten experiment control-data acquisition computers, referred to as application nodes. At the center of the star is a shared service node which supplies a set of shared services utilized by all of the application nodes. A program development node occupies one additional point of the star. The design and implementation of a network subsystem to support development and execution of operating systems for the application nodes is described. 6 figures, 1 table

  18. NEPTUNIX 2: Operating on computers network - Catalogued procedures

    International Nuclear Information System (INIS)

    Roux, Pierre.

    1982-06-01

    NEPTUNIX 2 is a package which carries out the simulation of complex processes described by numerous nonlinear algebro-differential equations. Its main features are: nonlinear or time-dependent parameters, implicit form, stiff systems, and dynamic change of equations leading to discontinuities in some variables. The mathematical model is thus built with an equation set F(x, x', l, t) = 0, where t is the independent variable, x' the derivative of x, and l an "algebraized" logical variable. The NEPTUNIX 2 package is divided into two successive major steps: a non-numerical step and a numerical step. The numerical step, using a representation of the model translated into FORTRAN in a form fitted for the executing computer, carries out the simulations; in this way, the NEPTUNIX 2 numerical step is portable. In contrast, the non-numerical step must be executed on an IBM 370-series computer or a compatible computer. The present manual describes NEPTUNIX 2 operating procedures when the two steps are executed on the same computer, and also when the numerical step is executed on another computer, whether or not connected to the same computing network [fr
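    A toy instance of the implicit model form F(x, x', l, t) = 0 may make the role of the logical variable concrete. The example below is an assumption for illustration, not taken from the manual: a state x decays while the logical variable l is true and freezes when l switches off at t = 1, producing a discontinuity in x':

```python
def F(x, xdot, l, t):
    """Implicit residual F(x, x', l, t) = x' + l*x; the model holds when
    the residual is zero."""
    return xdot + l * x

# Crude explicit-Euler walk-through (NEPTUNIX itself targets stiff systems
# with implicit methods; this only illustrates the model form).
x, dt = 1.0, 0.01
for k in range(200):
    t = k * dt
    l = 1.0 if t < 1.0 else 0.0   # "algebraized" logical variable
    xdot = -l * x                 # chosen so that F(x, x', l, t) = 0
    assert abs(F(x, xdot, l, t)) < 1e-12
    x += dt * xdot
# x has decayed toward exp(-1) during t < 1, then stays constant.
```

    Detecting the instant at which l flips and restarting the integrator there is exactly the "dynamic change of equations leading to discontinuities" the package handles.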

  19. A Human/Computer Learning Network to Improve Biodiversity Conservation and Research

    OpenAIRE

    Kelling, Steve; Gerbracht, Jeff; Fink, Daniel; Lagoze, Carl; Wong, Weng-Keen; Yu, Jun; Damoulas, Theodoros; Gomes, Carla

    2012-01-01

    In this paper we describe eBird, a citizen-science project that takes advantage of the human observational capacity to identify birds to species, which is then used to accurately represent patterns of bird occurrences across broad spatial and temporal extents. eBird employs artificial intelligence techniques such as machine learning to improve data quality by taking advantage of the synergies between human computation and mechanical computation. We call this a Human-Computer Learning Network,...

  20. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation.

    Science.gov (United States)

    Hu, Hongzhi; Mao, Huajuan; Hu, Xiaohua; Hu, Feng; Sun, Xuemin; Jing, Zaiping; Duan, Yunsuo

    2015-01-01

    Due to its extensive social influence, public health emergency has attracted great attention in today's society. Booming social networks are becoming a main information dissemination platform for such events and have raised high concern in emergency management, since a good prediction of information dissemination in social networks is necessary for estimating an event's social impact and devising a proper strategy. However, information dissemination is largely affected by complex interactive activities and group behaviors in social networks; existing methods and models are limited in achieving satisfactory prediction results due to open, changeable social connections and uncertain information-processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. In order to obtain better information dissemination prediction in social networks, this paper proposes an intelligent computation method under the framework of TDF (Theory-Data-Feedback) based on an ACP simulation system, which was successfully applied to the analysis of an A(H1N1) flu emergency.