WorldWideScience

Sample records for networked virtual supercomputers

  1. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in ways that differ from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
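
    The libvirt provisioning step this abstract describes can be illustrated with the libvirt Python bindings. Below is a minimal sketch, not the authors' actual tooling: it assumes a reachable qemu:///system hypervisor, a hypothetical disk image path, and an ordinary Linux bridge br0 standing in for the paper's Ethernet-over-Aries emulation.

```python
# Minimal sketch of provisioning one VM of a virtual cluster via libvirt,
# in the spirit of the abstract (libvirt + QEMU/KVM on compute nodes).
# Assumptions: qemu:///system is reachable, /var/lib/vms/node0.qcow2 exists,
# and a bridge "br0" stands in for the Ethernet-over-Aries emulation.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>vcluster-node0</name>
  <memory unit='GiB'>4</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/vms/node0.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the domain definition
dom.create()                            # boot the VM
print(dom.name(), "is running:", dom.isActive() == 1)
conn.close()
```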

  2. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers, so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and describing it in terms of the Octotron model. The suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
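
    The record does not detail the discovery mechanics, but the end product, a graph of nodes, switches and their interconnections, is easy to sketch. The following sketch assumes neighbor pairs have already been collected (e.g., from switch LLDP/SNMP tables); the link list is invented, not Lomonosov data, and networkx stands in for Octotron's own graph model.

```python
# Sketch: assembling a discovered Ethernet topology into a graph model,
# in the spirit of the Octotron approach (components + interconnections).
# The neighbor list is hypothetical; a real tool would collect it from
# LLDP/SNMP tables on the switches.
import networkx as nx

discovered_links = [
    ("switch-a", "node-001"), ("switch-a", "node-002"),
    ("switch-b", "node-003"), ("switch-a", "switch-b"),
]

g = nx.Graph()
for a, b in discovered_links:
    kind_a = "switch" if a.startswith("switch") else "compute_node"
    kind_b = "switch" if b.startswith("switch") else "compute_node"
    g.add_node(a, kind=kind_a)
    g.add_node(b, kind=kind_b)
    g.add_edge(a, b)

print(g.number_of_nodes(), "components,", g.number_of_edges(), "links")
# Sanity check of the model: every component should reach every other.
print("connected:", nx.is_connected(g))
```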

  3. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters, which are now available to most researchers. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities for creating the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.
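
    The abstract names lightweight virtualization without fixing a tool, so the idea can be illustrated with containers. A minimal sketch using the Docker SDK for Python follows; the image name, node count, and network name are illustrative assumptions rather than the authors' actual stack.

```python
# Sketch: assembling a small "virtual private supercomputer" from
# lightweight containers on one desktop. Uses the Docker SDK for Python;
# the image name and node count are illustrative assumptions.
import docker

client = docker.from_env()
net = client.networks.create("vcluster-net", driver="bridge")

nodes = []
for i in range(4):
    c = client.containers.run(
        "python:3.11-slim",              # stand-in compute-node image
        name=f"vnode-{i}",
        network="vcluster-net",
        command=["sleep", "infinity"],   # keep the node alive
        detach=True,
    )
    nodes.append(c)

print("virtual cluster up:", [c.name for c in nodes])

# Teardown once the experiment is done.
for c in nodes:
    c.remove(force=True)
net.remove()
```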

  4. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, studying this temporal network behavior requires a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool: a visual analytics system for investigating the temporal behavior and optimizing the communication performance of a supercomputer built on a Dragonfly network. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively supports visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics
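
    The system itself is interactive, but the time-series analysis it builds on, correlating counters collected at different levels of the Dragonfly hierarchy, can be sketched with pandas. The data below is synthetic and the column names are invented; this is only a sketch of the kind of correlation and lag analysis such a tool automates.

```python
# Sketch: correlating time series collected at two levels of a Dragonfly
# hierarchy (e.g., router-level vs. group-level traffic counters), the kind
# of analysis the visual analytics system automates. Data here is synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
t = pd.date_range("2018-03-01", periods=600, freq="s")
router_busy = pd.Series(rng.normal(0.5, 0.1, 600), index=t).clip(0, 1)
group_busy = (0.7 * router_busy.shift(5).fillna(0.5)
              + rng.normal(0, 0.03, 600))        # lagged dependence

df = pd.DataFrame({"router_busy": router_busy, "group_busy": group_busy})

# Rolling correlation exposes time-varying coupling between the two levels.
rolling_corr = df["router_busy"].rolling(60).corr(df["group_busy"])
print("mean 60 s rolling correlation:", round(rolling_corr.mean(), 3))

# Lag scan: which shift best aligns the two counters?
lags = {k: df["router_busy"].shift(k).corr(df["group_busy"]) for k in range(10)}
print("best lag:", max(lags, key=lags.get), "samples")
```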

  5. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters, which are now available to most researchers. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities for creating the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  6. High Performance Networks: From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication-intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both supercomputing and cloud computing the network enables distributed applications.

  7. Palacios and Kitten: high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  8. Virtual laboratory for fusion research in Japan

    International Nuclear Information System (INIS)

    Tsuda, K.; Nagayama, Y.; Yamamoto, T.; Horiuchi, R.; Ishiguro, S.; Takami, S.

    2008-01-01

    A virtual laboratory system for nuclear fusion research in Japan has been developed using SuperSINET, a super-high-speed network operated by the National Institute of Informatics. Sixteen sites, including major Japanese universities, the Japan Atomic Energy Agency and the National Institute for Fusion Science (NIFS), were mutually connected to SuperSINET at a speed of 1 Gbps by the end of the 2006 fiscal year. Collaboration categories in this virtual laboratory are as follows: large helical device (LHD) remote participation; remote use of the supercomputer system; and the all-Japan ST (Spherical Tokamak) research program. This virtual laboratory is a closed network system, connected to the Internet through the NIFS firewall in order to maintain a higher level of security. Collaborators at a remote station can control their diagnostic devices at LHD and analyze the LHD data as if they were in the LHD control room. Researchers at a remote station can use the supercomputer of NIFS in the same environment as at NIFS. In this paper, we describe the technologies in detail and the present status of the virtual laboratory. Furthermore, items that should be developed in the near future are also described.

  9. Organization Virtual or Networked?

    Directory of Open Access Journals (Sweden)

    Rūta Tamošiūnaitė

    2013-08-01

    Full Text Available Purpose—to present the distinction between “virtual organization” and “networked organization”, giving their definitions. Design/methodology/approach—review of previous research, systemic analysis of its findings and synthesis of the distinctive characteristics of “virtual organization” and “networked organization”. Findings—the main result of the research is a set of key features separating “virtual organization” and “networked organization”. Definitions of “virtual organization” and “networked organization” are presented. Originality/Value—the distinction between “virtual organization” and “networked organization” creates possibilities to use all the advantages of those types of organizations and gives a foundation for deeper research in this field. Research type: general review.

  10. Setting up virtual private network

    International Nuclear Information System (INIS)

    Huang Hongmei; Zhang Chengjun

    2003-01-01

    Setting up a virtual private network for a business enterprise provides a low-cost network foundation, increases the enterprise's network functionality and enlarges its private scope. The text introduces the principles of virtual private networks, their advantages, and the protocols used in them. Finally, this paper introduces several LAN-based technologies for setting up virtual private networks.

  11. Setting up virtual private network

    International Nuclear Information System (INIS)

    Huang Hongmei; Zhang Chengjun

    2003-01-01

    Setting up a virtual private network for a business enterprise provides a low-cost network foundation, increases enterprise network functionality and enlarges its private scope. This text introduces the principles of virtual private networks, their advantages, and the protocols applied in them. Finally, this paper introduces several LAN-based technologies for setting up virtual private networks.

  12. UbiWorld: An environment integrating virtual reality, supercomputing, and design

    Energy Technology Data Exchange (ETDEWEB)

    Disz, T.; Papka, M.E.; Stevens, R. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-07-01

    UbiWorld is a concept being developed by the Futures Laboratory group at Argonne National Laboratory that ties together the notion of ubiquitous computing (Ubicomp) with that of using virtual reality for rapid prototyping. The goal is to develop an environment where one can explore Ubicomp-type concepts without having to build real Ubicomp hardware. The basic notion is to extend object models in a virtual world by using distributed wide area heterogeneous computing technology to provide complex networking and processing capabilities to virtual reality objects.

  13. Lectures in Supercomputational Neurosciences: Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...
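
    A single-neuron integration conveys the model that the book's supercomputer implementation scales up to whole networks. The sketch below uses forward Euler and a common textbook parameter set for the Morris-Lecar equations; the values are not necessarily those used in the book.

```python
# Sketch: forward-Euler integration of one Morris-Lecar neuron, the model
# the book scales up to large cortical networks on supercomputers.
# Parameters are a common textbook set, not necessarily the book's own.
import numpy as np

C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0          # uF/cm^2, mS/cm^2
V_L, V_Ca, V_K = -60.0, 120.0, -84.0             # mV
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_ext = 90.0                                     # uA/cm^2, above threshold

dt, steps = 0.05, 40000                          # ms
V, w = -60.0, 0.0
spikes = 0
for _ in range(steps):
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    dV = (I_ext - g_L * (V - V_L) - g_Ca * m_inf * (V - V_Ca)
          - g_K * w * (V - V_K)) / C
    dw = phi * (w_inf - w) / tau_w
    V_new = V + dt * dV
    if V < 0.0 <= V_new:                         # upward zero crossing = spike
        spikes += 1
    V, w = V_new, w + dt * dw

print(f"{spikes} spikes in {steps * dt:.0f} ms of simulated time")
```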

  14. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  15. Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

    Directory of Open Access Journals (Sweden)

    Franco Callegati

    2016-01-01

    Full Text Available The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single tenant and multitenant scenarios. From the results of the evaluation it is possible to highlight the potential and limitations of running NFV on OpenStack.

  16. Virtualized Networks and Virtualized Optical Line Terminal (vOLT)

    Science.gov (United States)

    Ma, Jonathan; Israel, Stephen

    2017-03-01

    The success of the Internet and the proliferation of Internet of Things (IoT) devices are forcing telecommunications carriers to re-architect the central office as a datacenter (CORD) so as to bring datacenter economics and cloud agility to the central office (CO). The Open Network Operating System (ONOS) is the first open-source software-defined networking (SDN) operating system capable of managing and controlling network, computing, and storage resources to support the CORD infrastructure and network virtualization. The virtualized Optical Line Termination (vOLT) is one of the key components in such virtualized networks.

  17. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  18. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    Science.gov (United States)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model for virtual network flow identification based on the statistical properties of flows circulating in the data center network and on characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes for identifying virtual network functions. We have developed an algorithm for optimizing the placement of virtual network functions using the data obtained in our research. Our approach uses a hybrid method of virtualization combining virtual machines and containers, which reduces the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which enables it to scale to any number of network function copies.
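
    The record gives no architecture details, so a generic stand-in can illustrate the approach: flows described by statistical features, classified by a small neural network. The following scikit-learn sketch uses synthetic flows; the features, classes, and layer sizes are illustrative assumptions.

```python
# Sketch: identifying virtual network flows from statistical features with a
# small neural network, the kind of classifier the abstract describes.
# Features, classes and training data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
# Per-flow features: mean packet size, packet rate, flow duration, byte count.
storage = np.column_stack([rng.normal(1400, 60, n), rng.normal(9000, 900, n),
                           rng.normal(20, 5, n), rng.normal(2e8, 3e7, n)])
control = np.column_stack([rng.normal(200, 40, n), rng.normal(50, 10, n),
                           rng.normal(300, 60, n), rng.normal(1e6, 2e5, n)])
X = np.vstack([storage, control])
y = np.array([0] * n + [1] * n)      # 0 = storage vNF traffic, 1 = control

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("flow identification accuracy:", round(clf.score(X_te, y_te), 3))
```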

  19. Ecological network analysis for a virtual water network.

    Science.gov (United States)

    Fang, Delin; Chen, Bin

    2015-06-02

    The notion of virtual water flows provides important indicators to characterize the water consumption and allocation between different sectors via product transactions. However, the configuration of the virtual water network (VWN) still needs further investigation to identify the water interdependency among different sectors, as well as the network efficiency and stability in a socio-economic system. Ecological network analysis is chosen as a useful tool to examine the structure and function of the VWN and the interactions among its sectors. A balance analysis of efficiency and redundancy is also conducted to describe the robustness (RVWN) of the VWN. Then, network control analysis and network utility analysis are performed to investigate the dominant sectors and pathways for virtual water circulation and the mutual relationships between pairwise sectors. A case study of the Heihe River Basin in China shows that the balance between efficiency and redundancy is situated on the left side of the robustness curve, with lower efficiency and higher redundancy. The forestation, herding and fishing sectors and the industrial sectors are found to be the main controllers. The network tends to be more mutualistic and synergic, though some competitive relationships that weaken the virtual water circulation still exist.
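
    The efficiency/redundancy balance that the authors place on a robustness curve is commonly computed from a sector-to-sector flow matrix using Ulanowicz-style information indices. A minimal numpy sketch of one common formulation follows; the three-sector flow matrix is invented for illustration and is not Heihe River Basin data.

```python
# Sketch: Ulanowicz-style efficiency/redundancy balance for a virtual water
# network, computed from a sector-to-sector flow matrix (units: e.g. Mm^3/yr).
# The 3-sector matrix below is invented for illustration.
import numpy as np

T = np.array([[0.0, 40.0, 10.0],    # agriculture -> (agri, industry, homes)
              [5.0,  0.0, 25.0],    # industry
              [2.0,  8.0,  0.0]])   # households

tst = T.sum()                       # total system throughput
t_out = T.sum(axis=1, keepdims=True)
t_in = T.sum(axis=0, keepdims=True)
denom = t_out * t_in                # outer product Ti. * T.j

nz = T > 0                          # avoid log(0) on absent flows
ami = np.sum(T[nz] / tst * np.log2(T[nz] * tst / denom[nz]))
capacity = -np.sum(T[nz] / tst * np.log2(T[nz] / tst))
a = ami / capacity                  # relative ascendency: efficiency share
robustness = -a * np.log(a)         # one common robustness formulation

print(f"efficiency share a = {a:.3f}, redundancy share = {1 - a:.3f}")
print(f"robustness = {robustness:.3f} (peak at a = 1/e ~ 0.368)")
```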

  20. Car2x with software defined networks, network functions virtualization and supercomputers: technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  1. What is supercomputing ?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

    Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term ''supercomputing'' have spread over the past ten years. The performances of the main computers installed so far in the Japan Atomic Energy Research Institute are compared. There are two approaches to increasing computing speed using existing circuit elements: parallel processor systems and vector processor systems. CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its contents divide into increasing the computing speed of existing simulation calculations and accelerating the development of new atomic energy technology. Examples of supercomputing in the Japan Atomic Energy Research Institute are reported. (K.I.)

  2. Network function virtualization concepts and applicability in 5G networks

    CERN Document Server

    Zhang, Ying

    2018-01-01

    A horizontal view of newly emerged technologies in the field of network function virtualization (NFV), introducing the open source implementation efforts that bring NFV from design to reality This book explores the newly emerged technique of network function virtualization (NFV) through use cases, architecture, and challenges, as well as standardization and open source implementations. It is the first systematic source of information about cloud technologies' usage in the cellular network, covering the interplay of different technologies, the discussion of different design choices, and its impact on our future cellular network. Network Function Virtualization: Concepts and Applicability in 5G Networks reviews new technologies that enable NFV, such as Software Defined Networks (SDN), network virtualization, and cloud computing. It also provides an in-depth investigation of the most advanced open source initiatives in this area, including OPNFV, Openstack, and Opendaylight. Finally, this book goes beyond li...

  3. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X/MP48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  4. Energy-aware virtual network embedding in flexi-grid networks.

    Science.gov (United States)

    Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng

    2017-11-27

    Network virtualization technology has been proposed to allow multiple heterogeneous virtual networks (VNs) to coexist on a shared substrate network, which increases the utilization of the substrate network. Efficiently mapping VNs onto the substrate network, known as the VN embedding (VNE) problem, is a major challenge. Meanwhile, energy efficiency has been widely considered in network design, in terms of both operational expenses and ecological awareness. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the electricity cost of each arriving VN request. We also propose a polynomial-time heuristic algorithm in which virtual links are embedded sequentially to keep a reasonable acceptance ratio and maintain a low electricity cost. Numerical results show that the heuristic algorithm performs closely to the ILP for a small network, and we also demonstrate its applicability to larger networks.
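
    The paper's ILP also maps virtual links onto flexi-grid spectrum paths; the node-mapping core alone is compact enough to sketch. The PuLP model below minimizes an electricity-cost proxy under invented sizes, costs, and capacities, and should be read as a structural sketch rather than the authors' formulation.

```python
# Sketch: the node-mapping core of an energy-aware VNE ILP, in the spirit of
# the paper's formulation (which additionally maps virtual links onto
# flexi-grid spectrum paths). Sizes, costs and capacities are invented.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

v_nodes = ["a", "b", "c"]                 # virtual nodes with CPU demands
demand = {"a": 4, "b": 2, "c": 3}
s_nodes = ["s1", "s2", "s3", "s4"]        # substrate nodes
capacity = {"s1": 8, "s2": 4, "s3": 6, "s4": 4}
cost = {"s1": 1.2, "s2": 0.8, "s3": 1.0, "s4": 0.9}  # cost per CPU unit

prob = LpProblem("energy_aware_vne_nodes", LpMinimize)
x = {(v, s): LpVariable(f"x_{v}_{s}", cat=LpBinary)
     for v in v_nodes for s in s_nodes}

# Objective: electricity-cost proxy of the CPU the VN would consume.
prob += lpSum(cost[s] * demand[v] * x[v, s] for v in v_nodes for s in s_nodes)

for v in v_nodes:                          # each virtual node mapped once
    prob += lpSum(x[v, s] for s in s_nodes) == 1
for s in s_nodes:                          # substrate CPU capacity
    prob += lpSum(demand[v] * x[v, s] for v in v_nodes) <= capacity[s]
for s in s_nodes:                          # typical VNE rule: at most one
    prob += lpSum(x[v, s] for v in v_nodes) <= 1  # virtual node per substrate node

prob.solve()
mapping = {v: s for (v, s), var in x.items() if var.value() == 1}
print("mapping:", mapping, "cost:", value(prob.objective))
```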

  5. Energy-aware virtual network embedding in flexi-grid optical networks

    Science.gov (United States)

    Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng; Chen, Bin

    2018-01-01

    The virtual network embedding (VNE) problem is to map multiple heterogeneous virtual networks (VNs) onto a shared substrate network, which mitigates the ossification of the substrate network. Meanwhile, energy efficiency has been widely considered in network design. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the power increment of each arriving VN request. We also propose a polynomial-time heuristic algorithm in which virtual links are embedded sequentially to keep a reasonable acceptance ratio and maintain a low energy consumption. Numerical results show the functionality of the heuristic algorithm in a 24-node network.

  6. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems ETA 10 is a class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed

  7. Virtual network embedding in cross-domain network based on topology and resource attributes

    Science.gov (United States)

    Zhu, Lei; Zhang, Zhizhong; Feng, Linlin; Liu, Lilan

    2018-03-01

    Aiming at the issues of network architecture ossification and the diversity of access technologies, this paper studies cross-domain virtual network embedding algorithms. By analyzing the topological attributes of nodes in the virtual and physical networks from both local and global perspectives, combined with local network resource properties, we rank the embedding priority of the nodes using the PCA and TOPSIS methods. The link load distribution is also considered. On this basis, we propose a cross-domain virtual network embedding algorithm based on topology and resource attributes. The simulation results show that our algorithm increases the acceptance rate of multi-domain virtual network requests compared with existing virtual network embedding algorithms.
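
    Of the two ranking methods named, TOPSIS is compact enough to sketch directly. The numpy sketch below ranks candidate nodes by illustrative topology and resource attributes; in the paper the criterion weights would come from the PCA stage, whereas here they are simply assumed.

```python
# Sketch: TOPSIS ranking of candidate nodes by topology/resource attributes,
# the ranking step the abstract combines with PCA. Attribute values and
# weights are illustrative; the paper derives weights from its PCA stage.
import numpy as np

# Rows: nodes; columns: degree, closeness centrality, free CPU (all
# "benefit" criteria, i.e. larger is better).
A = np.array([[4, 0.50, 16.0],
              [6, 0.62,  8.0],
              [3, 0.41, 32.0],
              [5, 0.58, 12.0]], dtype=float)
w = np.array([0.3, 0.3, 0.4])

R = A / np.linalg.norm(A, axis=0)        # vector-normalize each criterion
V = R * w                                # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)

d_best = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti, axis=1)
closeness = d_worst / (d_best + d_worst)  # in [0, 1], larger = better

order = np.argsort(-closeness)
print("embedding priority (best first):", order.tolist())
print("closeness scores:", np.round(closeness[order], 3).tolist())
```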

  8. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  9. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. ''Power users'' were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University

  10. Modelling of virtual production networks

    Directory of Open Access Journals (Sweden)

    2011-03-01

    Full Text Available Nowadays many companies, especially small and medium-sized enterprises (SMEs), specialize in a limited field of production. This requires forming virtual production networks of cooperating enterprises in order to manufacture better, faster and cheaper. Moreover, some production orders cannot be fulfilled because no single company has sufficient production potential. In such cases a virtual production network of cooperating companies can execute these production orders. Such networks have a larger production capacity and many different resources, and can therefore execute many more production orders together than each member could separately. This kind of organization allows high-quality products to be made while keeping the maintenance costs of production capacity and resources comparatively low. In this paper a methodology for rapid prototyping of virtual production networks is proposed. It allows production orders to be executed on time while taking existing logistic constraints into account.

  11. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

    In research and development across various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) transfer the calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, reducing visualization processing time as well as making efficient use of the JAEA network has become necessary. As a solution, we introduced a remote visualization system that can utilize parallel processors on the supercomputer and reduce the usage of network resources by transferring only the data of intermediate visualization processes. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of visualization data and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also presented. (author)

  12. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
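
    The torus interconnect described in the claim can be made concrete with a small routing calculation. The sketch below computes neighbor sets and minimal hop counts on a 3D torus; the dimensions are illustrative and the code is independent of any Blue Gene specifics.

```python
# Sketch: neighbor sets and minimal hop distance on a 3D torus, the kind of
# interconnect the patent describes. Dimensions are illustrative.
from itertools import product

DIMS = (8, 8, 8)                          # torus extent per dimension

def neighbors(node):
    """The 6 nearest torus neighbors (+/-1 in each dimension, wrapping)."""
    out = []
    for axis in range(3):
        for step in (-1, 1):
            n = list(node)
            n[axis] = (n[axis] + step) % DIMS[axis]
            out.append(tuple(n))
    return out

def hops(a, b):
    """Minimal hop count: per-dimension shortest wrap-around distance."""
    return sum(min(abs(x - y), d - abs(x - y))
               for x, y, d in zip(a, b, DIMS))

print(neighbors((0, 0, 0)))
print("hops (0,0,0)->(7,4,1):", hops((0, 0, 0), (7, 4, 1)))  # 1 + 4 + 1 = 6

# Average hop count over the whole machine: one reason torus diameters
# stay modest even at scale.
nodes = list(product(*[range(d) for d in DIMS]))
avg = sum(hops((0, 0, 0), n) for n in nodes) / len(nodes)
print("average hops from a corner:", avg)
```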

  13. Shared protection based virtual network mapping in space division multiplexing optical networks

    Science.gov (United States)

    Zhang, Huibin; Wang, Wei; Zhao, Yongli; Zhang, Jie

    2018-05-01

    Space Division Multiplexing (SDM) has been introduced to improve the capacity of optical networks. In SDM optical networks, there are multiple cores/modes in each fiber link, and spectrum resources are multiplexed in both the frequency and the core/mode dimensions. Enabled by network virtualization technology, one SDM optical network substrate can be shared by several virtual network operators. Similar to point-to-point connection services, virtual networks (VNs) also need a certain survivability to guard against network failures. Based on customers' heterogeneous requirements on the survivability of their virtual networks, this paper studies the shared-protection-based VN mapping problem and proposes a Minimum Free Frequency Slots (MFFS) mapping algorithm to improve spectrum efficiency. Simulation results show that the proposed algorithm optimizes SDM optical networks significantly in terms of blocking probability and spectrum utilization.
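
    The abstract does not spell out the MFFS algorithm, but the underlying bookkeeping, frequency slots multiplexed across cores, can be sketched. The allocator below uses first-fit within a core and, as a packing rule suggested by (but only assumed from) the algorithm's name, prefers the feasible core with the fewest free slots; the paper's full algorithm also covers mapping and shared protection.

```python
# Sketch: slot allocation on one SDM link (cores x frequency slots), with a
# packing rule inspired by the algorithm's name: among cores that can fit
# the request contiguously, prefer the one with the fewest free slots. This
# reading of "Minimum Free Frequency Slots" is an assumption.
import numpy as np

CORES, SLOTS = 7, 20
occupied = np.zeros((CORES, SLOTS), dtype=bool)

def first_fit(core, width):
    """Leftmost contiguous run of `width` free slots on `core`, or None."""
    row = occupied[core]
    for start in range(SLOTS - width + 1):
        if not row[start:start + width].any():
            return start
    return None

def allocate(width):
    fits = [(core, first_fit(core, width)) for core in range(CORES)]
    fits = [(c, s) for c, s in fits if s is not None]
    if not fits:
        return None                       # request blocked
    # Pack tightly: choose the feasible core with the fewest free slots.
    core, start = min(fits, key=lambda cs: (~occupied[cs[0]]).sum())
    occupied[core, start:start + width] = True
    return core, start

for demand in (4, 3, 5, 4, 2):
    print("demand", demand, "->", allocate(demand))
print("free slots per core:", (~occupied).sum(axis=1).tolist())
```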

  14. Virtual Organizations: Beyond Network Organization

    Directory of Open Access Journals (Sweden)

    Liviu Gabriel CRETU

    2006-01-01

    Full Text Available One of the most used buzzwords in the (e-)business literature of the last decade is virtual organization. The term "virtual" can be identified in all sorts of combinations regarding the business world. From virtual products to virtual processes or virtual teams, everything that is “touched” by the computer's processing power instantly becomes virtual. Moreover, most of the literature treats virtual and network organizations as synonyms. This paper aims to draw a much more distinctive line between the two concepts. Providing a more coherent description of what a virtual organization might be is also one of our intentions.

  15. Virtual Stationary Automata for Mobile Networks

    National Research Council Canada - National Science Library

    Dolev, Shlomi; Gilbert, Seth; Lahiani, Limor; Lynch, Nancy; Nolte, Tina

    2005-01-01

    We define a programming abstraction for mobile networks called the Virtual Stationary Automata programming layer, consisting of real mobile clients, virtual timed I/O automata called virtual stationary automata (VSAs...

  16. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved using a supercomputer center serving a geographically distributed user base coupled via high speed communication networks
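
    Transient stability codes of the kind being accelerated integrate machine swing equations through a disturbance; a one-machine case shows the computational kernel. The sketch below integrates the classical single-machine/infinite-bus swing equation through a temporary fault, with illustrative textbook parameter values.

```python
# Sketch: the computational kernel of a transient stability study, the class
# of code the abstract describes accelerating: integrate the swing equation
# for a single machine against an infinite bus through a temporary fault.
# Parameters are illustrative textbook values.
import math

f0, H = 60.0, 5.0                  # system frequency (Hz), inertia const. (s)
M = 2 * H / (2 * math.pi * f0)     # per-unit inertia coefficient
Pm = 0.8                           # mechanical power input (p.u.)
Pmax_fault, Pmax_post = 0.4, 1.6   # transfer capability during/after fault
Pmax_pre = 2.0                     # pre-fault transfer capability

delta = math.asin(Pm / Pmax_pre)   # pre-fault equilibrium angle (rad)
omega = 0.0                        # speed deviation (rad/s)
dt, t_clear = 0.001, 0.15          # step (s), fault clearing time (s)

stable, t = True, 0.0
while t < 3.0:
    pmax = Pmax_fault if t < t_clear else Pmax_post
    domega = (Pm - pmax * math.sin(delta)) / M
    omega += dt * domega
    delta += dt * omega
    if delta > math.pi:            # angle runs away: loss of synchronism
        stable = False
        break
    t += dt

print("stable after fault" if stable else f"unstable at t = {t:.2f} s")
```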

  17. VNML: Virtualized Network Management Laboratory for Educational ...

    African Journals Online (AJOL)

    VNML: Virtualized Network Management Laboratory for Educational Purposes. ... Journal of Fundamental and Applied Sciences ... In this paper, we implement a Virtualized Network Management Laboratory named VNML, linked to college ...

  18. Topological Embedding Feature Based Resource Allocation in Network Virtualization

    Directory of Open Access Journals (Sweden)

    Hongyan Cui

    2014-01-01

    Full Text Available Virtualization provides a powerful way to run multiple virtual networks on a shared substrate network, which requires accurate and efficient mathematical models. Virtual network embedding is a challenge in network virtualization. In this paper, considering the degree of convergence when mapping a virtual network onto a substrate network, we propose a new embedding algorithm based on topology mapping convergence-degree. Convergence-degree means the degree of adjacency of a virtual network's nodes when they are mapped onto a substrate network. The contributions of our method are as follows. Firstly, we map virtual nodes onto the substrate nodes with the maximum convergence-degree. The simulation results show that our proposed algorithm largely enhances network utilization efficiency and decreases the complexity of the embedding problem. Secondly, we define the load balance rate to reflect the load balance of substrate links. The simulation results show that our proposed algorithm achieves better load balance. Finally, based on the features of the star topology, we further improve our embedding algorithm and make it suitable for application to star topologies. The test results show it achieves better performance than previous works.
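
    As a sketch of the idea, a greedy mapper can score each substrate candidate by how adjacent it is to the substrate nodes already hosting the virtual node's neighbors. The networkx sketch below encodes one plausible reading of the convergence-degree score, stated here as an assumption; the paper's exact definition may differ, and the topologies are invented.

```python
# Sketch: greedy node mapping guided by a convergence-degree-style score:
# prefer substrate nodes adjacent to the hosts already chosen for a virtual
# node's neighbors. An illustrative reading, not the paper's exact rule.
import networkx as nx

substrate = nx.cycle_graph(8)              # small substrate network
virtual = nx.path_graph(4)                 # virtual network to embed

mapping = {}
for v in sorted(virtual.nodes, key=virtual.degree, reverse=True):
    placed_neighbors = [mapping[u] for u in virtual[v] if u in mapping]
    best, best_score = None, (-1, -1)
    for s in substrate.nodes:
        if s in mapping.values():
            continue                       # one virtual node per substrate node
        # Convergence-degree-style score: adjacency to already-placed
        # neighbors, with substrate degree as a tie-breaker.
        conv = sum(1 for p in placed_neighbors if substrate.has_edge(s, p))
        score = (conv, substrate.degree(s))
        if score > best_score:
            best, best_score = s, score
    mapping[v] = best

print("virtual -> substrate mapping:", mapping)
```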

  19. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  20. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  1. A virtual network computer's optical storage virtualization scheme

    Science.gov (United States)

    Wang, Jianzong; Hu, Huaixiang; Wan, Jiguang; Wang, Peng

    2008-12-01

    In this paper, we present the architecture and implementation of a virtual network computer (VNC) optical storage virtualization scheme called VOSV. Its task is to manage the mapping of virtual optical storage to physical optical storage, a technique known as optical storage virtualization. The design of VOSV targets the optical storage resources of different clients and servers that exhibit high read-sharing patterns. VOSV uses several mechanisms, such as a two-level cache, a VNC-server embedded module and the iSCSI protocol, to improve performance. The results measured on the prototype are encouraging, indicating that VOSV provides high I/O performance.
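
    The two-level cache at the heart of VOSV can be sketched generically: a small fast level backed by a larger second level, both with LRU eviction and read-through to the backing store. Sizes and the backing-store stand-in below are illustrative assumptions.

```python
# Sketch: a generic two-level read cache of the kind VOSV layers in front of
# shared optical storage (high read-sharing workloads benefit most).
# Level sizes and the backing-store stand-in are illustrative.
from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, l1_size, l2_size, backing_read):
        self.l1 = OrderedDict()            # small, fast (e.g. client RAM)
        self.l2 = OrderedDict()            # larger, slower (e.g. server RAM)
        self.sizes = (l1_size, l2_size)
        self.backing_read = backing_read   # slow path, e.g. optical media
        self.hits = {"l1": 0, "l2": 0, "miss": 0}

    def _put(self, level, size, key, value):
        level[key] = value
        level.move_to_end(key)
        if len(level) > size:
            level.popitem(last=False)      # evict least recently used

    def read(self, key):
        if key in self.l1:
            self.hits["l1"] += 1
            self.l1.move_to_end(key)
            return self.l1[key]
        if key in self.l2:
            self.hits["l2"] += 1
            value = self.l2[key]
        else:
            self.hits["miss"] += 1
            value = self.backing_read(key)
        self._put(self.l1, self.sizes[0], key, value)
        self._put(self.l2, self.sizes[1], key, value)
        return value

cache = TwoLevelCache(4, 16, backing_read=lambda k: f"block-{k}")
for block in [1, 2, 3, 1, 1, 4, 5, 6, 2, 1]:
    cache.read(block)
print(cache.hits)
```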

  2. A distributed framework for inter-domain virtual network embedding

    Science.gov (United States)

    Wang, Zihua; Han, Yanni; Lin, Tao; Tang, Hui

    2013-03-01

    Network virtualization has been a promising technology for overcoming the Internet impasse. A main challenge in network virtualization is the efficient assignment of virtual resources. Existing work has focused on intra-domain solutions, whereas the inter-domain situation is more common in realistic settings. In this paper, we present a distributed inter-domain framework for mapping virtual networks to physical networks, which can improve the performance of virtual network embedding. The distributed framework is based on a multi-agent approach. A set of messages for information exchange is defined. We design different operations and IPTV use scenarios to validate the advantages of our framework. The use cases show that our framework can solve the inter-domain problem efficiently.

  3. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabyte memory per node and capable of 222 teraflops, making KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  4. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabyte memory per node and capable of 222 teraflops, making KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  5. Resource slicing in virtual wireless networks: a survey

    OpenAIRE

    Richart, Matias; Baliosian De Lazzari, Javier Ernesto; Serrat Fernández, Juan; Gorricho Moreno, Juan Luis

    2016-01-01

    New architectural and design approaches for radio access networks have appeared with the introduction of network virtualization in the wireless domain. One of these approaches splits the wireless network infrastructure into isolated virtual slices under their own management, requirements, and characteristics. Despite the advances in wireless virtualization, there are still many open issues regarding the resource allocation and isolation of wireless slices. Because of the dynamics and share...

  6. Designing communication and remote controlling of virtual instrument network system

    Science.gov (United States)

    Lei, Lin; Wang, Houjun; Zhou, Xue; Zhou, Wenjian

    2005-01-01

    In this paper, a virtual instrument network over a LAN, and ultimately remote control of virtual instruments, is realized based on virtual instrument technology and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem and a local instrument control subsystem. This paper introduces the structure of the LabWindows-based virtual instrument network in detail. Essential techniques are introduced, including the design of the network communication application, the client/server programming mode, the realization of communication between a remote PC and the server, the transfer of workstation control, and the server program. Such a virtual instrument network may also be connected to the Internet. The above technologies have been verified in an electronic-measurement virtual instrument network that has already been built, demonstrating their practical value. Experiments and applications validate that this design is effective.
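
    The three-part structure described here, a server brokering remote clients' commands to a local instrument controller, can be sketched with plain sockets. In the sketch below the line-based command protocol and the simulated instrument are invented placeholders, not the paper's LabWindows/CVI implementation.

```python
# Sketch: a minimal server subsystem that accepts remote client commands and
# forwards them to a (simulated) local instrument, echoing the three-part
# structure described above. The line-based command protocol is invented.
import socket
import threading
import time

def fake_instrument(command):
    """Stand-in for the local instrument control subsystem."""
    return "1.234 V" if command == "MEASURE?" else "OK"

def server(host="127.0.0.1", port=5025):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as f:
            for line in f:                     # one command per line
                f.write(fake_instrument(line.strip()) + "\n")
                f.flush()

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                                # let the server start listening

# Telnet-client subsystem stand-in: send two commands, print the replies.
with socket.create_connection(("127.0.0.1", 5025)) as c, c.makefile("rw") as f:
    for cmd in ("MEASURE?", "RESET"):
        f.write(cmd + "\n")
        f.flush()
        print(cmd, "->", f.readline().strip())
```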

  7. Designing communication and remote controlling of virtual instrument network system

    International Nuclear Information System (INIS)

    Lei Lin; Wang Houjun; Zhou Xue; Zhou Wenjian

    2005-01-01

    In this paper, a virtual instrument network over a LAN, and ultimately remote control of virtual instruments, is realized based on virtual instrument technology and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem and a local instrument control subsystem. This paper introduces the structure of the LabWindows-based virtual instrument network in detail. Essential techniques are introduced, including the design of the network communication application, the client/server programming mode, the realization of communication between a remote PC and the server, the transfer of workstation control, and the server program. Such a virtual instrument network may also be connected to the Internet. The above technologies have been verified in an electronic-measurement virtual instrument network that has already been built, demonstrating their practical value. Experiments and applications validate that this design is effective.

  8. Performance verification of network function virtualization in software defined optical transport networks

    Science.gov (United States)

    Zhao, Yongli; Hu, Liyazhou; Wang, Wei; Li, Yajie; Zhang, Jie

    2017-01-01

    With the continuous opening up of resource acquisition and application, a large variety of network hardware appliances are deployed as communication infrastructure. Launching a new network application usually implies replacing obsolete devices and requires the related space and power to accommodate it, which increases the energy and capital investment. Network function virtualization (NFV) aims to address these problems by consolidating many kinds of network equipment onto industry-standard elements such as servers, switches and storage. Many types of IT resources have been deployed to run Virtual Network Functions (vNFs), such as virtual switches and routers. How to deploy NFV in optical transport networks is therefore a problem of great importance. This paper focuses on this problem and gives an implementation architecture for NFV-enabled optical transport networks based on Software Defined Optical Networking (SDON), with the procedure of vNF call and return. In particular, an implementation solution for an NFV-enabled optical transport node is designed, and a parallel processing method for NFV-enabled OTN nodes is proposed. To verify the performance of NFV-enabled SDON, the protocol interaction procedures of control function virtualization and node function virtualization are demonstrated on an SDON testbed. Finally, the benefits and challenges of the parallel processing method for NFV-enabled OTN nodes are simulated and analyzed.
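
    The abstract does not specify the parallel processing method, but the generic gain is dispatching independent vNF calls on a node concurrently rather than serially. The sketch below demonstrates that with concurrent.futures and simulated vNF stubs; names and service times are invented.

```python
# Sketch: serial vs. parallel dispatch of independent vNF calls on one OTN
# node, the kind of gain a parallel processing method targets. The vNFs here
# are simulated stubs with fixed service times.
import time
from concurrent.futures import ThreadPoolExecutor

def vnf(name, service_time):
    time.sleep(service_time)          # stand-in for real vNF processing
    return f"{name} done"

calls = [("firewall", 0.2), ("monitor", 0.2), ("shaper", 0.2), ("nat", 0.2)]

t0 = time.perf_counter()
serial = [vnf(n, s) for n, s in calls]
t_serial = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(lambda c: vnf(*c), calls))
t_parallel = time.perf_counter() - t0

print(f"serial: {t_serial:.2f} s, parallel: {t_parallel:.2f} s")
```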

  9. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  10. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provide global barrier and notification functions. Integrated into the node design is a list-based prefetcher. The memory system implements transaction memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for parallel message passing.

  11. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  12. A survey of middleware for sensor and network virtualization.

    Science.gov (United States)

    Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd

    2014-12-12

    Wireless Sensor Networks (WSNs) are leading to a new paradigm, the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for one particular application. However, the future of WSNs lies in the aggregation and allocation of resources to serve diverse applications. WSN virtualization by middleware is an emerging concept that enables the aggregation of multiple independent heterogeneous devices, networks, radios and software platforms, and enhances application development. Middleware for WSN virtualization can further be categorized into sensor virtualization and network virtualization. Such middleware poses several challenges, such as the efficient decoupling of networks, devices and software. This paper provides an overview of previous and current middleware designs for WSN virtualization, covering design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper presents a proposed model, challenges, and future opportunities for further research in middleware designs for WSN virtualization.

  14. Virtual Lab for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    PICOVICI, D.

    2008-06-01

    Full Text Available This article details an experimental system developed to enhance education and research in the area of wireless network technologies. The system, referred to as the Virtual Lab (VL), primarily targets first-time users or users with limited experience in programming and using wireless sensor networks. The VL enables a set of predefined sensor networks to be remotely accessed and controlled for constructive and time-efficient experimentation. In order to facilitate the user's wireless sensor applications, the VL uses three main components: (a) a Virtual Lab Mote (VLM), representing the wireless sensor; (b) a Virtual Lab Client (VLC), representing the user's tool to interact with the VLM; and (c) a Virtual Lab Server (VLS), representing the software link between the VLM and VLC. The concept has been proven using the Tmote Sky modules produced by Moteiv. Initial experimental use clearly demonstrates that the VL approach dramatically reduces the learning curve involved in programming and using the associated wireless sensor nodes. In addition, the VL allows the user's focus to be directed towards the experiment and not towards the software programming challenges.
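
    The three-component split described above (VLM, VLC, VLS) is essentially a relay architecture. As a rough illustration, the Python sketch below plays the VLS role for a single request: it accepts a textual command from a client and would forward it to a mote. The port, the one-command protocol, and the in-memory stand-in for the mote link are all assumptions, since the abstract does not specify the VL protocol.

        import socketserver

        MOTE_LOG = []  # stand-in for the serial link to a Virtual Lab Mote (VLM)

        class VLSHandler(socketserver.StreamRequestHandler):
            def handle(self):
                # Read one command line from the Virtual Lab Client (VLC) ...
                command = self.rfile.readline().decode().strip()
                # ... forward it to the mote (here: just record it) ...
                MOTE_LOG.append(command)
                # ... and return an acknowledgement to the client.
                self.wfile.write(f"ACK {command}\n".encode())

        if __name__ == "__main__":
            with socketserver.TCPServer(("localhost", 9999), VLSHandler) as server:
                server.handle_request()  # serve a single request for demonstration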

  15. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to show a level of sustained performance, for a variety of applications, that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and well known: bandwidth and latency, both for main memory and for the internal network, are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy. However, technical problems are not the only obstacles preventing scientists from fully exploiting the potential of modern supercomputers. More and more, organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to delivering sustained TFlop/s performance for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  16. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massive parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially with real-time constraints, can be challenging and requires the use of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometry, or oil-spill simulation on the sea surface.

  17. A User-Customized Virtual Network Platform for NaaS Cloud

    Directory of Open Access Journals (Sweden)

    Lei Xiao

    2016-01-01

    Full Text Available Public cloud providers today treat computing and storage resources as the user's main demand, making it difficult for users to deploy complex networks in the public cloud. This paper proposes a virtual cloud platform that takes the network as the core demand of the user and can provide the user with the ability to freely design the network architecture as well as all kinds of virtual resources. Networks are isolated by port groups of the virtual distributed switch, and data forwarding and access control between different network segments are implemented by virtual machines running a soft-routing system. This paper also studies the management interface for the network architecture and a uniform way to connect to the remote desktops of virtual resources on the web, hoping to provide some new ideas for the Network as a Service model.

  18. Security for Virtual Private Networks

    OpenAIRE

    Magdalena Nicoleta Iacob

    2015-01-01

    Network security must be a permanent concern for every company, given that threats evolve more rapidly today than in the past. This paper contains a general classification of the cryptographic algorithms used in today's networks and presents an implementation of virtual private networks using one of the most secure methods - digital certificate authentication.

  19. Mobile Virtual Private Networking

    Science.gov (United States)

    Pulkkis, Göran; Grahn, Kaj; Mårtens, Mathias; Mattsson, Jonny

    Mobile Virtual Private Networking (VPN) solutions based on the Internet Security Protocol (IPSec), Transport Layer Security/Secure Socket Layer (TLS/SSL), Secure Shell (SSH), 3G/GPRS cellular networks, Mobile IP, and the presently experimental Host Identity Protocol (HIP) are described, compared and evaluated. Mobile VPN solutions based on HIP are recommended for future networking because of their superior processing efficiency and network capacity demand features. Mobile VPN implementation issues associated with the IP protocol versions IPv4 and IPv6 are also evaluated. Mobile VPN implementation experiences are presented and discussed.

  20. Expanding Usability of Virtual Network Laboratory in IT Engineering Education

    Directory of Open Access Journals (Sweden)

    Dalibor M Dobrilovic

    2013-02-01

    Full Text Available This paper deals with the importance of using virtual network laboratories in IT engineering education. It also presents a particular virtual network laboratory model developed for use in a Computer Networks course. This virtual network laboratory, called VNLab, is based on virtualization technology. It has been successfully tested in the educational process of a Computer Networks course for IT undergraduate students. Its usability for network-related courses is analyzed by comparison with the curricula recommended by world organizations such as IEEE, ACM and AIS. This paper focuses on expanding the usability of this virtual network laboratory to other, non-network-related courses. The primary expansion fields are IT System Administration, IT Systems and Data Security, and Operating Systems. The possible learning scenarios, learning tools and concepts for making this system applicable in these three additional fields are presented through analyses of compatibility with the learning topics and outcomes recommended by IEEE, ACM and AIS.

  1. A Parallel Supercomputer Implementation of a Biologically Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and NVIDIA graphical processing units, respectively. A global spiking list that represents the state of the neural network at each instant is described. This list indexes each neuron that fires during the current simulation time step, so that the influence of their spikes is processed simultaneously on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel cluster with 64 nodes (128 cores).
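
    The global spiking list is the key data structure here. The following serial Python sketch (toy weights and thresholds, no MPI or CUDA) shows the idea: gather the indices of all neurons that fired in a time step, then process the influence of all those spikes in one pass, which is the work each computing unit would do on its slice of the network.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000
        weights = rng.normal(0.0, 0.05, (n, n))   # synaptic weights (toy values)
        potential = rng.uniform(0.0, 1.0, n)      # membrane potentials
        THRESHOLD = 1.0

        for step in range(10):
            spiking_list = np.flatnonzero(potential >= THRESHOLD)  # the global spiking list
            potential[spiking_list] = 0.0                          # reset neurons that fired
            # All spikes on the list are processed in one pass; on a cluster each rank
            # would apply the same shared list to its own slice of the neuron array.
            potential += weights[:, spiking_list].sum(axis=1)
            potential += rng.uniform(0.0, 0.2, n)                  # external drive
            print(step, len(spiking_list))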

  2. Cloud-Centric and Logically Isolated Virtual Network Environment Based on Software-Defined Wide Area Network

    Directory of Open Access Journals (Sweden)

    Dongkyun Kim

    2017-12-01

    Full Text Available Recent development of distributed cloud environments requires advanced network infrastructure in order to facilitate network automation, virtualization, high-performance data transfer, and secured access to end-to-end resources across regional boundaries. In order to meet these innovative cloud networking requirements, a software-defined wide area network (SD-WAN) is primarily demanded to converge distributed cloud resources (e.g., virtual machines (VMs)) in a programmable and intelligent manner over distant networks. Therefore, this paper proposes a logically isolated networking scheme designed to integrate distributed cloud resources into dynamic and on-demand virtual networking over SD-WAN. The performance evaluation and experimental results of the proposed scheme indicate that virtual network convergence time is minimized in two different network models: (1) an operating OpenFlow-oriented SD-WAN infrastructure (KREONET-S), which is deployed on the advanced national research network in Korea, and (2) Mininet-based experimental and emulated networks.

  3. When Rural Reality Goes Virtual.

    Science.gov (United States)

    Husain, Dilshad D.

    1998-01-01

    In rural towns where sparse population and few businesses are barriers, virtual reality may be the only way to bring work-based learning to students. A partnership between a small-town high school, the Ohio Supercomputer Center, and a high-tech business will enable students to explore the workplace using virtual reality. (JOW)

  4. PROVISIONING RESTORABLE VIRTUAL PRIVATE NETWORKS USING BARABASI AND WAXMAN TOPOLOGY GENERATION MODEL

    Directory of Open Access Journals (Sweden)

    R. Ravi

    2010-12-01

    Full Text Available As internet usage grows exponentially, network security issues become increasingly important. Network security measures are needed to protect data during transmission. Various security controls are used to prevent access by hackers: firewalls, virtual private networks and encryption algorithms. Out of these, the virtual private network plays a vital role in preventing hackers from accessing networks. A Virtual Private Network (VPN) provides end users with a way to privately access information on their network over a public network infrastructure such as the internet. Using a technique called "tunneling", data packets are transmitted across a public routed network, such as the internet, in a way that simulates a point-to-point connection. Virtual private networks provide customers with a secure and low-cost communication environment. The basic structure of the virtual circuit is to create a logical path from the source port to the destination port. This path may incorporate many hops between routers for the formation of the circuit. The final, logical path or virtual circuit acts in the same way as a direct connection between the two ports. Our proposed Provisioning Restorable Virtual Private Networks Algorithm (PRA) combines the provisioning and restoration algorithms to achieve better results than those obtained by independent restoration and provisioning. In order to ensure service quality and availability in Virtual Private Networks, seamless recovery from failures is essential. The quality of service of the Virtual Private Networks is also improved by the combination of provisioning and restoration. The bandwidth sharing concept is also applied on links to improve the quality of service in the Virtual Private Network. The performance analysis of the proposed algorithm is carried out in terms of cost, the number of nodes, the number of VPN nodes, delay, asymmetric ratio, and delay with constraints.

  5. Propagation of crises in the virtual water trade network

    Science.gov (United States)

    Tamea, Stefania; Laio, Francesco; Ridolfi, Luca

    2015-04-01

    The international trade of agricultural goods is associated with the displacement of the water used to produce such goods and embedded in trade as a factor of production. Water virtually exchanged from producing to consuming countries, named virtual water, defines flows across an international network of 'virtual water trade', which enables the assessment of environmental forcings and implications of trade, such as global water savings or country dependencies on foreign water resources. Given the recent expansion of commodity (and virtual water) trade, in both displaced volumes and network structure, concerns have been raised about the exposure of individuals and societies to crises. In fact, if one country had to markedly decrease its export following a socio-economic or environmental crisis, such as a war or a drought, many, if not all, countries would be affected through a cascade effect within the trade network. The present contribution proposes a mechanistic model describing the propagation of a local crisis through the virtual water trade network, accounting for the network structure and the virtual water balance of all countries. The model, built on data-based assumptions, is tested on the real case study of the Argentinean crisis of 2008-09, when the internal agricultural production (measured as virtual water volume) decreased by 26% and the virtual water export of Argentina dropped accordingly. Crisis propagation and effects on the virtual water trade are correctly captured, showing the way forward for investigations of crisis impact and country vulnerability based on the results of the proposed model.
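
    The mechanism described, a local export cut cascading through importers, can be sketched in a few lines. The Python below uses invented flow volumes and a made-up damping factor, so it illustrates the propagation logic rather than the calibrated model of the paper.

        flows = {  # exporter -> {importer: virtual water volume}, toy numbers
            "ARG": {"CHN": 30.0, "BRA": 5.0},
            "BRA": {"CHN": 40.0, "EU": 20.0},
            "CHN": {"EU": 10.0},
            "EU": {},
        }
        DAMPING = 0.5  # fraction of a country's shortfall passed on downstream (assumed)

        def propagate(origin, cut_fraction):
            """Return the virtual-water deficit each country experiences."""
            deficit = {c: 0.0 for c in flows}
            # Initial shock: each importer of `origin` loses its share of the cut volume.
            frontier = {imp: cut_fraction * vol for imp, vol in flows[origin].items()}
            while frontier:
                nxt = {}
                for country, loss in frontier.items():
                    deficit[country] += loss
                    total_out = sum(flows[country].values())
                    if total_out == 0.0:
                        continue
                    # The country compensates by exporting less, passing part of the
                    # loss downstream in proportion to its existing flows.
                    for importer, vol in flows[country].items():
                        passed = DAMPING * loss * vol / total_out
                        if passed > 1e-6:
                            nxt[importer] = nxt.get(importer, 0.0) + passed
                frontier = nxt
            return deficit

        print(propagate("ARG", 0.26))  # the 26% Argentinean production drop from the abstract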

  6. Massivizing Networked Virtual Environments on Clouds

    NARCIS (Netherlands)

    Shen, S.

    2015-01-01

    Networked Virtual Environments (NVEs) are virtual environments where physically distributed, Internet-connected users can interact and socialize with others. The most popular NVEs are online games, which have hundreds of millions of users and a global market of tens of billions of euros per year.

  7. Coded Network Function Virtualization

    DEFF Research Database (Denmark)

    Al-Shuwaili, A.; Simone, O.; Kliewer, J.

    2016-01-01

    Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers.

  8. Teaching Network Security in a Virtual Learning Environment

    Science.gov (United States)

    Bergstrom, Laura; Grahn, Kaj J.; Karlstrom, Krister; Pulkkis, Goran; Astrom, Peik

    2004-01-01

    This article presents a virtual course with the topic network security. The course has been produced by Arcada Polytechnic as a part of the production team Computer Networks, Telecommunication and Telecommunication Systems in the Finnish Virtual Polytechnic. The article begins with an introduction to the evolution of the information security…

  9. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    Full Text Available This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013) held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included an Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics ([77], [79], [80]), and Visualization by Supercomputing Data Mining [81]. ______________________ [11] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of the Forty-Fourth Meeting of the Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of the 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of the 2010 Conference on Applied Research in Information Technology, sponsored by

  10. Column generation algorithms for virtual network embedding in flexi-grid optical networks.

    Science.gov (United States)

    Lin, Rongping; Luo, Shan; Zhou, Jingwei; Wang, Sheng; Chen, Bin; Zhang, Xiaoning; Cai, Anliang; Zhong, Wen-De; Zukerman, Moshe

    2018-04-16

    Network virtualization provides means for efficient management of network resources by embedding multiple virtual networks (VNs) that efficiently share the same substrate network. Such virtual network embedding (VNE) gives rise to a challenging problem: how to optimize resource allocation to VNs and guarantee their performance requirements. In this paper, we provide VNE algorithms for efficient management of flexi-grid optical networks. We provide an exact algorithm aiming to minimize the total embedding cost, in terms of spectrum cost and computation cost, for a single VN request. Then, to achieve scalability, we also develop a heuristic algorithm for the same problem. We apply these two algorithms to a dynamic traffic scenario where many VN requests arrive one by one. We first demonstrate by simulations, for the case of a six-node network, that the heuristic algorithm obtains blocking probabilities very close to the exact algorithm (about 0.2% higher). Then, for a network of realistic size (namely, USnet), we demonstrate that the blocking probability of our new heuristic algorithm is about one order of magnitude lower than that of a simpler heuristic algorithm, which was a component of an earlier published algorithm.
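
    The exact and heuristic algorithms themselves are beyond an abstract, but the basic shape of a VNE heuristic, greedy node mapping followed by shortest-path link mapping, can be sketched briefly. The Python below uses networkx and invented CPU capacities; the paper's flexi-grid spectrum constraints are far richer than this.

        import networkx as nx

        substrate = nx.Graph()
        substrate.add_nodes_from((i, {"cpu": 10}) for i in range(6))
        substrate.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)], cost=1)

        vn_nodes = {"a": 3, "b": 2, "c": 4}     # virtual node -> CPU demand (invented)
        vn_links = [("a", "b"), ("b", "c")]     # virtual links to embed

        mapping, used = {}, set()
        for vnode, demand in sorted(vn_nodes.items(), key=lambda kv: -kv[1]):
            # Greedy node mapping: the unused substrate node with the most spare CPU.
            host = max((n for n in substrate if n not in used),
                       key=lambda n: substrate.nodes[n]["cpu"])
            if substrate.nodes[host]["cpu"] < demand:
                raise RuntimeError("VN request rejected")
            substrate.nodes[host]["cpu"] -= demand
            used.add(host)
            mapping[vnode] = host

        for u, v in vn_links:
            # Link mapping: route each virtual link over a shortest substrate path.
            path = nx.shortest_path(substrate, mapping[u], mapping[v], weight="cost")
            print(f"virtual link {u}-{v} -> substrate path {path}")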

  11. Virtual private network (VPN)

    International Nuclear Information System (INIS)

    Caskey, Susan

    2006-01-01

    A virtual private network (VPN) is the essential security feature that allows remote monitoring systems to take advantage of the low communications cost of the internet. This paper introduces the VPN concept and summarizes the networking and security principles. The mechanics of security, for example, types of encryption and protocols for exchange of keys between partners, are explained. Important issues for partners in different countries include the interoperability and mutual accreditations of systems. (author)
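
    The key-exchange mechanics summarized above can be made concrete with a toy Diffie-Hellman exchange, the classic scheme by which two VPN partners derive a shared secret over an open channel. This is an illustrative sketch only: the prime and base are chosen for legibility, and real deployments use vetted groups and libraries.

        import secrets

        P = 2**127 - 1   # a Mersenne prime; fine for a demo, far too small for real use
        G = 3            # public base agreed by both partners

        a = secrets.randbelow(P - 2) + 2   # partner A's private exponent
        b = secrets.randbelow(P - 2) + 2   # partner B's private exponent

        A = pow(G, a, P)                   # sent from A to B in the clear
        B = pow(G, b, P)                   # sent from B to A in the clear

        key_a = pow(B, a, P)               # computed by A
        key_b = pow(A, b, P)               # computed by B
        assert key_a == key_b              # both ends now hold the same secret key
        print(hex(key_a)[:20] + "...")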

  12. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.]

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington, DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second, linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  13. Tensor Network Quantum Virtual Machine (TNQVM)

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. The Tensor Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that leverages a distributed network of GPUs to simulate quantum circuits, drawing on recent results from tensor network theory.
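
    The elementary operation underneath any tensor-network circuit simulator is contracting a gate tensor against indices of a state tensor. Below is a minimal NumPy sketch of that operation: a plain state vector stored as a rank-n tensor, not TNQVM's actual distributed tensor-network machinery.

        import numpy as np

        n = 3                                               # qubits
        state = np.zeros((2,) * n); state[(0,) * n] = 1.0   # |000>

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate

        CX = np.zeros((2, 2, 2, 2))                         # CNOT as a rank-4 tensor
        for c in (0, 1):
            for t in (0, 1):
                CX[c, t ^ c, c, t] = 1.0                    # indices: (c_out, t_out, c_in, t_in)

        def apply_1q(psi, gate, qubit):
            """Contract a 1-qubit gate against one tensor index, restoring axis order."""
            out = np.tensordot(gate, psi, axes=([1], [qubit]))
            return np.moveaxis(out, 0, qubit)

        def apply_2q(psi, gate, q1, q2):
            out = np.tensordot(gate, psi, axes=([2, 3], [q1, q2]))
            return np.moveaxis(out, [0, 1], [q1, q2])

        state = apply_1q(state, H, 0)
        state = apply_2q(state, CX, 0, 1)
        print(np.round(state.reshape(-1), 3))               # Bell state: weight on |000> and |110>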

  14. Collective network for computer structures

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Chen, Dong [Croton On Hudson, NY]; Gara, Alan [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Heidelberger, Philip [Cortlandt Manor, NY]; Hoenicke, Dirk [Ossining, NY]; Takken, Todd E [Brewster, NY]; Steinmacher-Burow, Burkhard D [Wernau, DE]; Vranas, Pavlos M [Bedford Hills, NY]

    2011-08-16

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in an asynchronous or synchronized manner. When implemented in a massively parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.
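
    The reduction dataflow of such a collective network can be emulated in software: each node combines its own value with its children's partial results and forwards a single packet toward the root. The Python sketch below assumes a binary combining tree purely for illustration.

        def tree_reduce(values, op):
            """Reduce one value per node up a binary tree; the root ends up with the result."""
            n = len(values)
            partial = list(values)
            # Children of node i are 2i+1 and 2i+2; combine from the leaves upward.
            for i in reversed(range(n)):
                for child in (2 * i + 1, 2 * i + 2):
                    if child < n:
                        partial[i] = op(partial[i], partial[child])
            return partial[0]

        node_values = [3, 1, 4, 1, 5, 9, 2, 6]
        print(tree_reduce(node_values, lambda x, y: x + y))   # 31 (global sum)
        print(tree_reduce(node_values, max))                  # 9  (global max)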

  15. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

    In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecast, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very large scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described

  16. A Survey on Virtualization of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ga-Won Lee

    2012-02-01

    Full Text Available Wireless Sensor Networks (WSNs) are gaining tremendous importance thanks to their broad range of commercial applications such as smart home automation, health care and industrial automation. In these applications, multi-vendor and heterogeneous sensor nodes are deployed. Due to strict administrative control over the specific WSN domains, communication barriers, conflicting goals and the economic interests of different WSN sensor node vendors, it is difficult to introduce a large-scale federated WSN. By allowing heterogeneous sensor nodes in WSNs to coexist on a shared physical sensor substrate, virtualization in sensor networks may provide flexibility and cost-effective solutions, promote diversity, ensure security and increase manageability. This paper surveys the novel approach of using large-scale federated WSN resources in a sensor virtualization environment. Our focus in this paper is to introduce a few design goals, the challenges and opportunities of research in the field of sensor network virtualization, and to illustrate the current status of research in this field. This paper also presents a wide array of state-of-the-art projects related to sensor network virtualization.

  19. Virtual Network Embedding via Monte Carlo Tree Search.

    Science.gov (United States)

    Haeri, Soroush; Trajkovic, Ljiljana

    2018-02-01

    Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables the coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping while MaVEn-S uses the shortest-path algorithm. They formalize the virtual node mapping problem using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete-event VNE simulator to implement and evaluate the performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both acceptance and revenue-to-cost ratios. Simulation results show that the proposed algorithms find more profitable solutions than the existing algorithms. Given additional computation time, they further improve the embedding solutions.
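
    As a flavor of the search the MaVEn algorithms perform, the Python sketch below reduces the idea to a one-level Monte Carlo search: for each candidate substrate node, random rollouts complete the remaining mapping, and the best-scoring action is taken. All capacities, demands, and the profit function are invented; the paper's full MCTS builds an actual search tree with selection, expansion, simulation, and backpropagation.

        import random

        SUBSTRATE = {0: 8, 1: 6, 2: 9, 3: 5}   # substrate node -> spare CPU (invented)
        VN_DEMANDS = [4, 3, 2]                  # CPU demands of virtual nodes, placed in order

        def rollout(capacity, remaining):
            """Randomly complete a partial mapping; return toy profit, -inf if infeasible."""
            profit = 0.0
            for demand in remaining:
                feasible = [n for n, c in capacity.items() if c >= demand]
                if not feasible:
                    return float("-inf")
                n = random.choice(feasible)
                capacity = dict(capacity)
                capacity[n] -= demand
                profit += 0.9 * demand          # toy revenue minus toy cost
            return profit

        def choose_action(capacity, demand, remaining, n_rollouts=200):
            """Pick the substrate node whose random rollouts score best."""
            best, best_value = None, float("-inf")
            for node, cap in capacity.items():
                if cap < demand:
                    continue
                after = dict(capacity)
                after[node] -= demand
                value = sum(rollout(after, remaining) for _ in range(n_rollouts)) / n_rollouts
                if value > best_value:
                    best, best_value = node, value
            if best is None:
                raise RuntimeError("VN request rejected: no feasible node mapping")
            return best

        capacity = dict(SUBSTRATE)
        for i, demand in enumerate(VN_DEMANDS):
            node = choose_action(capacity, demand, VN_DEMANDS[i + 1:])
            capacity[node] -= demand
            print(f"virtual node {i} (demand {demand}) -> substrate node {node}")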

  20. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  1. Cloudified Mobility and Bandwidth Prediction in Virtualized LTE Networks

    NARCIS (Netherlands)

    Zhao, Zongliang; Karimzadeh Motallebi Azar, Morteza; Braun, Torsten; Pras, Aiko; van den Berg, Hans Leo

    Network Function Virtualization involves implementing network functions (e.g., virtualized LTE component) in software that can run on a range of industry standard server hardware, and can be migrated or instantiated on demand. A prediction service hosted on cloud infrastructures enables consumers to

  2. Virtual networks pluralistic approach for the next generation of Internet

    CERN Document Server

    Duarte, Otto Carlos M B

    2013-01-01

    The first chapter of this title concerns virtualization techniques that allow the sharing of computational resources; basically, slicing a real computational environment into virtual computational environments that are isolated from one another. The Xen and OpenFlow virtualization platforms are then presented in Chapter 2 and a performance analysis of both is provided. This chapter also defines the primitives that the network virtualization infrastructure must provide for allowing the piloting plane to manage virtual network elements. Following this, interfaces for system management of the two platforms

  3. Server virtualization management of corporate network with hyper-v

    OpenAIRE

    Kovalenko, Taras

    2012-01-01

    This paper considers the main tasks and problems of server virtualization. The practical value of virtualization in a corporate network, as well as the advantages and disadvantages of applying server virtualization, are also considered.

  4. Virtual terrain: a security-based representation of a computer network

    Science.gov (United States)

    Holsopple, Jared; Yang, Shanchieh; Argauer, Brian

    2008-03-01

    Much research has been put forth in recent years towards the detection, correlation, and prediction of cyber attacks. As this research progresses, there is an increasing need for contextual information about a computer network to provide an accurate situational assessment. Typical approaches adopt contextual information as needed; yet such ad hoc effort may lead to unnecessary or even conflicting features. The concept of virtual terrain is, therefore, developed and investigated in this work. Virtual terrain is a common representation of crucial information about network vulnerabilities, accessibilities, and criticalities. A virtual terrain model encompasses operating systems, firewall rules, running services, missions, user accounts, and network connectivity. It is defined as connected graphs with arc attributes defining dynamic relationships among vertices that model network entities, such as services, users, and machines. The virtual terrain representation is designed to allow feasible development and maintenance of the model, as well as efficacy in the use of the model. This paper describes the considerations in developing the virtual terrain schema, exemplary virtual terrain models, and algorithms utilizing the virtual terrain model for situation and threat assessment.
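
    A virtual terrain, as defined here, is a graph with typed vertices and attributed arcs, which makes a small sketch easy. Every host, service, rule, and user in the Python below is fabricated for illustration; the point is the shape of the model and a simple reachability query over it.

        virtual_terrain = {
            "nodes": {
                "web01": {"type": "machine", "os": "Ubuntu 22.04", "criticality": 0.9},
                "db01":  {"type": "machine", "os": "RHEL 9",       "criticality": 1.0},
                "httpd": {"type": "service", "port": 443, "runs_on": "web01"},
                "alice": {"type": "user",    "accounts": ["web01"]},
            },
            "arcs": [
                # (src, dst, attributes): accessibility relations with firewall context
                ("internet", "web01", {"allowed_ports": [443],  "firewall": "fw-edge"}),
                ("web01",    "db01",  {"allowed_ports": [5432], "firewall": "fw-internal"}),
            ],
        }

        def reachable_from(entry):
            """Which entities can be touched from an entry point, following open arcs?"""
            seen, frontier = set(), [entry]
            while frontier:
                src = frontier.pop()
                for a, b, attrs in virtual_terrain["arcs"]:
                    if a == src and b not in seen and attrs["allowed_ports"]:
                        seen.add(b)
                        frontier.append(b)
            return seen

        print(reachable_from("internet"))   # {'web01', 'db01'}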

  5. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

    Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies intent on either competing directly in the supercomputer arena or providing entry-level systems from which to graduate to supercomputers are springing up everywhere. Even well-founded organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that, from both the hardware and software points of view, they may be superior. We may be facing the same problems with supercomputers that we faced with video systems.

  6. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT, THOMAS J.; TARMAN, THOMAS D.; MARTINEZ, LUIS M.; MILLER, MARC M.; ADAMS, ROGER L.; CHEN, HELEN Y.; BRANDT, JAMES M.; WYCKOFF, PETER S.

    2000-07-24

    This document highlights the DISCOM² Distance Computing and Communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  7. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources. This enables adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  8. Modeling virtualized downlink cellular networks with ultra-dense small cells

    KAUST Repository

    Ibrahim, Hazem

    2015-09-11

    The unrelenting increase in mobile user populations and traffic demand drives cellular network operators to densify their infrastructure. Network densification increases the spatial frequency reuse efficiency while maintaining signal-to-interference-plus-noise ratio (SINR) performance, and hence increases the spatial spectral efficiency and improves overall network performance. However, control signaling in such dense networks consumes considerable bandwidth and limits the densification gain. Radio access network (RAN) virtualization via control plane (C-plane) and user plane (U-plane) splitting has recently been proposed to lighten the control signaling burden and improve network throughput. In this paper, we present a tractable analytical model for virtualized downlink cellular networks, using tools from stochastic geometry. We then apply the developed modeling framework to obtain design insights for virtualized RANs and quantify the associated performance improvement. © 2015 IEEE.
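
    The stochastic-geometry setup behind such models can be reproduced in a short Monte Carlo experiment: drop base stations as a Poisson point process, attach the user to the nearest one, and measure SINR against the aggregate interference. All densities and powers in the Python sketch below are arbitrary choices, not the paper's parameters.

        import numpy as np

        rng = np.random.default_rng(1)
        LAMBDA = 1e-4    # base station density (per m^2), arbitrary
        R = 2000.0       # radius of the simulated disc (m)
        ALPHA = 4.0      # path-loss exponent
        NOISE = 1e-13    # noise power, arbitrary units

        def one_sinr():
            """One realization: PPP of base stations, user at the origin, nearest-BS association."""
            n = max(rng.poisson(LAMBDA * np.pi * R**2), 1)  # Poisson number of base stations
            d = R * np.sqrt(rng.uniform(size=n))            # distances of uniform points to origin
            fading = rng.exponential(size=n)                # Rayleigh fading (exponential power)
            rx = fading * d ** (-ALPHA)                     # received powers under path loss
            serving = np.argmin(d)                          # attach to the nearest base station
            interference = rx.sum() - rx[serving]
            return rx[serving] / (interference + NOISE)

        sinrs = np.array([one_sinr() for _ in range(2000)])
        print("coverage P[SINR > 0 dB] ~", (sinrs > 1.0).mean())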

  9. Evaluating the Limits of Network Topology Inference Via Virtualized Network Emulation

    Science.gov (United States)

    2015-06-01

    First, we automatically build topological ground truth according to various network generation models and create emulated Cisco router networks by leveraging and modifying existing emulation software.

  10. Developing a Virtual Network of Research Observatories

    Science.gov (United States)

    Hooper, R. P.; Kirschtl, D.

    2008-12-01

    The hydrologic community has been discussing the concept of a network of observatories for the advancement of hydrologic science in areas of scaling processes, in testing generality of hypotheses, and in examining non-linear couplings between hydrologic, biotic, and human systems. The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) is exploring the formation of a virtual network of observatories, formed from existing field studies without regard to funding source. Such a network would encourage sharing of data, metadata, field methods, and data analysis techniques to enable multidisciplinary synthesis, meta-analysis, and scientific collaboration in hydrologic and environmental science and engineering. The virtual network would strive to provide both the data and the environmental context of the data through advanced cyberinfrastructure support. The foundation for this virtual network is Water Data Services that enable the publication of time-series data collected at fixed points using a services-oriented architecture. These publication services, developed in the CUAHSI Hydrologic Information Systems project, permit the discovery of data from both academic and government sources through a single portal. Additional services under consideration are publication of geospatial data sets, immersive environments based upon site digital elevation models, and a common web portal to member sites populated with structured data about the site (such as land use history and geologic setting) to permit understanding the environmental context of the data being shared.

  11. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  12. Evolution of the global virtual water trade network.

    Science.gov (United States)

    Dalin, Carole; Konar, Megan; Hanasaki, Naota; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio

    2012-04-17

    Global freshwater resources are under increasing pressure from economic development, population growth, and climate change. The international trade of water-intensive products (e.g., agricultural commodities) or virtual water trade has been suggested as a way to save water globally. We focus on the virtual water trade network associated with international food trade, built with annual trade data and annual modeled virtual water content. The evolution of this network from 1986 to 2007 is analyzed and linked to trade policies, socioeconomic circumstances, and agricultural efficiency. We find that the number of trade connections and the volume of water associated with global food trade more than doubled in 22 years. Despite this growth, constant organizational features were observed in the network. However, both regional and national virtual water trade patterns significantly changed. Indeed, Asia increased its virtual water imports by more than 170%, switching from North America to South America as its main partner, whereas North America oriented toward a growing intraregional trade. A dramatic rise in China's virtual water imports is associated with its increased soy imports after a domestic policy shift in 2000. Significantly, this shift has led the global soy market to save water on a global scale, but it also relies on expanding soy production in Brazil, which contributes to deforestation in the Amazon. We find that the international food trade has led to enhanced savings in global water resources over time, indicating its growing efficiency in terms of global water use.

  13. A Service-Oriented Approach for Dynamic Chaining of Virtual Network Functions over Multi-Provider Software-Defined Networks

    Directory of Open Access Journals (Sweden)

    Barbara Martini

    2016-06-01

    Full Text Available Emerging technologies such as Software-Defined Networks (SDN) and Network Function Virtualization (NFV) promise to address cost reduction and flexibility in network operation while enabling innovative network service delivery models. However, operational network service delivery solutions still need to be developed that actually exploit these technologies, especially at the multi-provider level. Indeed, the implementation of network functions as software running over a virtualized infrastructure, provisioned on a service basis, lets one envisage an ecosystem of network services that are dynamically and flexibly assembled by orchestrating Virtual Network Functions even across different provider domains, thereby coping with changing user and service requirements and context conditions. In this paper we propose an approach that adopts Service-Oriented Architecture (SOA) technology-agnostic architectural guidelines in the design of a solution for orchestrating and dynamically chaining Virtual Network Functions. We discuss how SOA, NFV, and SDN may complement each other in realizing dynamic network function chaining through service composition specification, service selection, service delivery, and placement tasks. Then, we describe the architecture of a SOA-inspired NFV orchestrator, which leverages SDN-based network control capabilities to address effective delivery of elastic chains of Virtual Network Functions. Preliminary results of prototype implementation and testing activities are also presented, along with the benefits that Network Service Providers derive from adaptive network service provisioning in a multi-provider environment through the orchestration of computing and networking services to provide end users with an enhanced service experience.
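
    Dynamic chaining can be viewed, at its simplest, as choosing one instance of each function in the requested chain so that processing plus inter-domain latency is minimized, which is a small dynamic program. The providers, functions, and latencies in the Python sketch below are fabricated; the paper's orchestrator of course involves much more than this selection step.

        CHAIN = ["firewall", "dpi", "nat"]            # requested VNF chain, in order
        INSTANCES = {                                  # function -> provider -> processing latency (ms)
            "firewall": {"provA": 2.0, "provB": 3.0},
            "dpi":      {"provA": 7.0, "provB": 4.0},
            "nat":      {"provA": 1.0, "provB": 1.5},
        }
        LINK = {("provA", "provA"): 0.0, ("provB", "provB"): 0.0,
                ("provA", "provB"): 5.0, ("provB", "provA"): 5.0}   # inter-domain latencies

        def best_chain():
            """Dynamic program over (chain position, chosen provider)."""
            cost = dict(INSTANCES[CHAIN[0]])
            choice = {p: [p] for p in cost}
            for fn in CHAIN[1:]:
                new_cost, new_choice = {}, {}
                for p, latency in INSTANCES[fn].items():
                    prev = min(cost, key=lambda q: cost[q] + LINK[(q, p)])
                    new_cost[p] = cost[prev] + LINK[(prev, p)] + latency
                    new_choice[p] = choice[prev] + [p]
                cost, choice = new_cost, new_choice
            end = min(cost, key=cost.get)
            return choice[end], cost[end]

        print(best_chain())   # -> (['provB', 'provB', 'provB'], 8.5) with these toy numbers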

  14. Crosstalk-aware virtual network embedding over inter-datacenter optical networks with few-mode fibers

    Science.gov (United States)

    Huang, Haibin; Guo, Bingli; Li, Xin; Yin, Shan; Zhou, Yu; Huang, Shanguo

    2017-12-01

    Virtualization of datacenter (DC) infrastructures enables infrastructure providers (InPs) to offer novel services like virtual networks (VNs). Furthermore, optical networks have been employed to connect metro-scale, geographically distributed DCs. The synergistic virtualization of DC infrastructures and optical networks enables efficient VN service over inter-DC optical networks (inter-DCONs). However, the capacity of the standard single-mode fibers (SSMFs) used is limited by their nonlinear characteristics. Thus, mode-division multiplexing (MDM) technology based on few-mode fibers (FMFs) can be employed to increase the capacity of optical networks. However, modal crosstalk (XT) introduced by optical fibers and components deployed in MDM optical networks impacts the performance of VN embedding (VNE) over inter-DCONs with FMFs. In this paper, we propose an XT-aware VNE mechanism over inter-DCONs with FMFs. The impact of XT is considered throughout the VNE procedures. Simulation results show that the proposed XT-aware VNE achieves better blocking probability and spectrum utilization than conventional VNE mechanisms.

  15. Multi-agent: a technique to implement geo-visualization of networked virtual reality

    Science.gov (United States)

    Lin, Zhiyong; Li, Wenjing; Meng, Lingkui

    2007-06-01

    Networked Virtual Reality (NVR) is a system based on network connectivity and shared spatial information, whose demands cannot be fully met by the existing architectures and application patterns of VR. In this paper, we propose a new architecture for NVR based on a Multi-Agent framework, which includes detailed definitions of the various agents and their functions and a full description of the collaboration mechanism. Through prototype system tests with DEM data and 3D model data, the advantages of the Multi-Agent-based Networked Virtual Reality system in terms of data loading time, user response time, scene construction time, etc., are verified. First, we introduce the characteristics of Networked Virtual Reality and of the Multi-Agent technique in Section 1. Then we give the architecture design of Networked Virtual Reality based on Multi-Agent in Section 2, which covers the rules of task division, the multi-agent architecture used to implement Networked Virtual Reality, and the functions of the agents. Section 3 shows the prototype implementation according to the design. Finally, Section 4 discusses the benefits of using Multi-Agent techniques to implement geovisualization of Networked Virtual Reality.

  16. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]; Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]; Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]; Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]; Kandalla, Krishna [Cray Inc, Bloomington, MN (United States)]; Mendygral, Peter [Cray Inc, Bloomington, MN (United States)]

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code-named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  17. Drawing Inspiration from Human Brain Networks: Construction of Interconnected Virtual Networks.

    Science.gov (United States)

    Murakami, Masaya; Kominami, Daichi; Leibnitz, Kenji; Murata, Masayuki

    2018-04-08

    Virtualization of wireless sensor networks (WSN) is widely considered as a foundational block of edge/fog computing, which is a key technology that can help realize next-generation Internet of things (IoT) networks. In such scenarios, multiple IoT devices and service modules will be virtually deployed and interconnected over the Internet. Moreover, application services are expected to be more sophisticated and complex, thereby increasing the number of modifications required for the construction of network topologies. Therefore, it is imperative to establish a method for constructing a virtualized WSN (VWSN) topology that achieves low latency on information transmission and high resilience against network failures, while keeping the topological construction cost low. In this study, we draw inspiration from inter-modular connectivity in human brain networks, which achieves high performance when dealing with large-scale networks composed of a large number of modules (i.e., regions) and nodes (i.e., neurons). We propose a method for assigning inter-modular links based on a connectivity model observed in the cerebral cortex of the brain, known as the exponential distance rule (EDR) model. We then choose endpoint nodes of these links by controlling inter-modular assortativity, which characterizes the topological connectivity of brain networks. We test our proposed methods using simulation experiments. The results show that the proposed method based on the EDR model can construct a VWSN topology with an optimal combination of communication efficiency, robustness, and construction cost. Regarding the selection of endpoint nodes for the inter-modular links, the results also show that high assortativity enhances the robustness and communication efficiency because of the existence of inter-modular links of two high-degree nodes.
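
    The EDR model at the heart of this construction says the probability of an inter-modular link decays exponentially with distance, p(d) ~ exp(-lambda * d). The Python sketch below samples a link set under that rule; the module coordinates, decay constant, and link budget are invented, and sampling is with replacement for brevity.

        import math
        import random

        random.seed(42)
        modules = {i: (random.uniform(0, 100), random.uniform(0, 100)) for i in range(12)}
        LAMBDA = 0.05      # decay constant of the EDR (assumed)
        N_LINKS = 20       # inter-modular link budget (assumed)

        pairs = [(i, j) for i in modules for j in modules if i < j]

        def weight(pair):
            """EDR weight: exponential decay with the distance between two modules."""
            a, b = modules[pair[0]], modules[pair[1]]
            return math.exp(-LAMBDA * math.dist(a, b))

        links = random.choices(pairs, weights=[weight(p) for p in pairs], k=N_LINKS)
        for a, b in sorted(set(links)):
            print(f"module {a} <-> module {b}")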

  18. Drawing Inspiration from Human Brain Networks: Construction of Interconnected Virtual Networks

    Directory of Open Access Journals (Sweden)

    Masaya Murakami

    2018-04-01

    Virtualization of wireless sensor networks (WSN) is widely considered a foundational building block of edge/fog computing, which is a key technology that can help realize next-generation Internet of things (IoT) networks. In such scenarios, multiple IoT devices and service modules will be virtually deployed and interconnected over the Internet. Moreover, application services are expected to be more sophisticated and complex, thereby increasing the number of modifications required for the construction of network topologies. Therefore, it is imperative to establish a method for constructing a virtualized WSN (VWSN) topology that achieves low latency in information transmission and high resilience against network failures, while keeping the topological construction cost low. In this study, we draw inspiration from inter-modular connectivity in human brain networks, which achieves high performance when dealing with large-scale networks composed of a large number of modules (i.e., regions) and nodes (i.e., neurons). We propose a method for assigning inter-modular links based on a connectivity model observed in the cerebral cortex of the brain, known as the exponential distance rule (EDR) model. We then choose the endpoint nodes of these links by controlling inter-modular assortativity, which characterizes the topological connectivity of brain networks. We test our proposed methods using simulation experiments. The results show that the proposed method based on the EDR model can construct a VWSN topology with an optimal combination of communication efficiency, robustness, and construction cost. Regarding the selection of endpoint nodes for the inter-modular links, the results also show that high assortativity enhances the robustness and communication efficiency because of the existence of inter-modular links between two high-degree nodes.

  19. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within a complex-networks approach. A three-layer system of nested models of complex networks is proposed, comprising an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex network simulation is proposed. Performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure of complex network simulation are discussed, including the organization of distributed calculations, crawling data in social networks, and visualization of results. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, evolution of financial networks, and epidemic spreading.
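
    One of the applications named above, rumor spreading, can be illustrated with a toy cascade on a synthetic social graph (a minimal sketch, not the paper's three-layer model; the graph size and spreading probability are assumptions):

      # Toy rumor cascade on a scale-free graph.
      import random
      import networkx as nx

      random.seed(7)
      G = nx.barabasi_albert_graph(10_000, 3)
      informed, frontier, rounds = {0}, {0}, 0
      while frontier:
          nxt = set()
          for u in frontier:
              for v in G.neighbors(u):
                  if v not in informed and random.random() < 0.2:  # assumed probability
                      informed.add(v)
                      nxt.add(v)
          frontier, rounds = nxt, rounds + 1
      print(f"rumor reached {len(informed)} of {G.number_of_nodes()} nodes in {rounds} rounds")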

  20. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  1. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  2. Virtualized Network Function Orchestration System and Experimental Network Based QR Recognition for a 5G Mobile Access Network

    Directory of Open Access Journals (Sweden)

    Misun Ahn

    2017-12-01

    This paper proposes a virtualized network function orchestration system based on Network Function Virtualization (NFV), one of the main technologies in 5G mobile networks. Such a system should provide connectivity between network devices and be able to create and distribute network functions flexibly. This system focuses mainly on access networks. By experimenting with various scenarios of user services established and activated in a network, we examine whether rapid adoption of a new service is possible and whether network resources can be managed efficiently. The proposed method is based on Bluetooth transfer technology and mesh networking to provide automatic connections between network machines, and on the Docker platform, a container virtualization technology, for setting up and managing key functions. Additionally, the system includes clustering and recovery measures for network functions based on the Docker platform. We briefly introduce a QR-code recognition service as the user service used to examine the proposal and, based on this service, we evaluate the functions of the proposal and present an analysis. With the proposed approach, container relocation has been implemented according to a network device's CPU usage, and we confirm successful service operation through functional evaluation on a real test bed. We measure QR-code recognition speed as the number of network devices is gradually increased and confirm that recognition speed increases as more network devices are assigned to the user service.
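
    The CPU-usage-triggered relocation step can be sketched roughly with the Docker SDK for Python; the 80% threshold is an assumption, and a real orchestrator would also handle checkpointing and rescheduling, which this fragment does not:

      # Flag CPU-hot containers as relocation candidates (requires a running
      # Docker daemon; relocation itself is out of scope for this fragment).
      import docker

      THRESHOLD = 80.0  # assumed CPU-percent trigger
      client = docker.from_env()
      for c in client.containers.list():
          s = c.stats(stream=False)
          cpu_d = (s["cpu_stats"]["cpu_usage"]["total_usage"]
                   - s["precpu_stats"]["cpu_usage"]["total_usage"])
          sys_d = (s["cpu_stats"]["system_cpu_usage"]
                   - s["precpu_stats"]["system_cpu_usage"])
          pct = 100.0 * cpu_d / sys_d * s["cpu_stats"].get("online_cpus", 1)
          if pct > THRESHOLD:
              print(f"{c.name}: {pct:.1f}% CPU -> candidate for relocation")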

  3. Converged Optical Network and Data Center Virtual Infrastructure Planning

    DEFF Research Database (Denmark)

    Georgakilas, Konstantinos; Tzanakaki, Anna; Anastasopoulos, Markos

    2012-01-01

    This paper presents a detailed study of planning virtual infrastructures (VIs) over a physical infrastructure comprising integrated optical network and data center resources with the aim of enabling sharing of physical resources among several virtual operators and services. Through the planning process, the VI topology and virtual resources are identified and mapped to the physical resources. Our study assumes a practical VI demand model without any in-advance global knowledge of the VI requests, which are handled sequentially. Through detailed integer linear program modeling, two objective functions—one that minimizes the overall power consumption of the infrastructure and one that minimizes the wavelength utilization—are compared. Both are evaluated for the virtual wavelength path and wavelength path optical network architectures. The first objective results in power consumption savings...
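
    The power-minimizing objective can be illustrated with a toy integer linear program, e.g. with PuLP; the node sets, power figures, and capacity limit below are invented and the model is far smaller than the paper's:

      # Toy ILP: map each virtual node to one physical node, minimizing power.
      from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

      vnodes = ["v1", "v2", "v3"]
      power = {"p1": 120.0, "p2": 90.0, "p3": 150.0}  # invented per-mapping power cost

      prob = LpProblem("vi_mapping", LpMinimize)
      x = LpVariable.dicts("x", [(v, p) for v in vnodes for p in power], cat=LpBinary)
      prob += lpSum(power[p] * x[(v, p)] for v in vnodes for p in power)
      for v in vnodes:                                  # map each virtual node once
          prob += lpSum(x[(v, p)] for p in power) == 1
      for p in power:                                   # assumed capacity of 2 per node
          prob += lpSum(x[(v, p)] for v in vnodes) <= 2

      prob.solve()
      print([(v, p) for v in vnodes for p in power if x[(v, p)].value() == 1])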

  4. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  5. Cognitive virtual network operator games

    CERN Document Server

    Duan, Lingjie; Shou, Biying

    2014-01-01

    This SpringerBrief provides an overview of a cognitive mobile virtual network operator's (C-MVNO) decisions under investment flexibility, supply uncertainty, and market competition in cognitive radio networks. This is a new research area at the nexus of cognitive radio engineering and microeconomics. The authors focus on an operator's joint spectrum investment and service pricing decisions. Readers will learn how to trade off the two flexible investment choices (dynamic spectrum leasing and spectrum sensing) under supply uncertainty. Furthermore, if there is more than one operator, we present

  6. Synchronized Pair Configuration in Virtualization-Based Lab for Learning Computer Networks

    Science.gov (United States)

    Kongcharoen, Chaknarin; Hwang, Wu-Yuin; Ghinea, Gheorghita

    2017-01-01

    More studies are concentrating on using virtualization-based labs to facilitate computer or network learning concepts. Some benefits are lower hardware costs and greater flexibility in reconfiguring computer and network environments. However, few studies have investigated effective mechanisms for using virtualization fully for collaboration…

  7. Clustered Data Management in Virtual Docker Networks Spanning Geo-Redundant Data Centers: A Performance Evaluation Study of Docker Networking

    OpenAIRE

    Alansari, Hayder

    2017-01-01

    Software containers in general, and Docker in particular, are becoming more popular both in software development and deployment. Software containers are intended to be a lightweight virtualization that provides the isolation of virtual machines with a performance that is close to native. Docker provides not only virtual isolation but also virtual networking to connect the isolated containers in the desired way. Many alternatives exist when it comes to the virtual networking provided by Docke...

  8. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem, and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  9. Virtualized cloud data center networks issues in resource management

    CERN Document Server

    Tsai, Linjiun

    2016-01-01

    This book discusses the characteristics of virtualized cloud networking, identifies the requirements of cloud network management, and illustrates the challenges in deploying virtual clusters in multi-tenant cloud data centers. The book also introduces network partitioning techniques to provide contention-free allocation, topology-invariant reallocation, and highly efficient resource utilization, based on the Fat-tree network structure. Managing cloud data center resources without considering resource contentions among different cloud services and dynamic resource demands adversely affects the performance of cloud services and reduces the resource utilization of cloud data centers. These challenges are mainly due to strict cluster topology requirements, resource contentions between uncooperative cloud services, and spatial/temporal data center resource fragmentation. Cloud data center network resource allocation/reallocation which cope well with such challenges will allow cloud services to be provisioned with ...

  10. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience of this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order-of-magnitude reduction of the waiting time, is presented.

  11. Networking support for collaborative virtual reality projects in national, european and international context

    OpenAIRE

    Hommes, F.; Pless, E.

    2004-01-01

    The report describes experiences from networking support for two three-year virtual reality projects. Networking requirements, which depend on the virtual reality environment and the planned distributed scenarios, are specified and verified in the real network. Networking problems, especially those due to the collaborative, distributed character of interaction via the Internet, are presented.

  12. Joint Orchestration of Cloud-Based Microservices and Virtual Network Functions

    OpenAIRE

    Kouchaksaraei, Hadi Razzaghi; Karl, Holger

    2018-01-01

    Recent studies show the increasing popularity of distributed cloud applications, which are composed of multiple microservices. Besides their known benefits, the microservice architecture also makes it possible to mix and match cloud applications and Network Function Virtualization (NFV) services (service chains), which are composed of Virtual Network Functions (VNFs). Provisioning complex services containing VNFs and microservices in a combined NFV/cloud platform can enhance service quality and optimise co...

  13. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops", or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  14. Energy-efficient virtual optical network mapping approaches over converged flexible bandwidth optical networks and data centers.

    Science.gov (United States)

    Chen, Bowen; Zhao, Yongli; Zhang, Jie

    2015-09-21

    In this paper, we develop a virtual link priority mapping (LPM) approach and a virtual node priority mapping (NPM) approach to improve energy efficiency and to reduce spectrum usage over converged flexible-bandwidth optical networks and data centers. For comparison, the lower bound of virtual optical network mapping is used for the benchmark solutions. Simulation results show that the LPM approach achieves better performance in terms of power consumption, energy efficiency, spectrum usage, and the number of regenerators compared to the NPM approach.

  15. Virtualized Network Control. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Ghani, Nasir [Univ. of New Mexico, Albuquerque, NM (United States)

    2013-02-01

    This document is the final report for the Virtualized Network Control (VNC) project, which was funded by the United States Department of Energy (DOE) Office of Science. This project was also informally referred to as Advanced Resource Computation for Hybrid Service and TOpology NEtworks (ARCHSTONE). This report provides a summary of the project's activities, tasks, deliverables, and accomplishments. It also provides a summary of the documents, software, and presentations generated as part of the project's activities. Namely, the Appendix contains an archive of the deliverables, documents, and presentations generated as part of this project.

  16. A Game for Energy-Aware Allocation of Virtualized Network Functions

    Directory of Open Access Journals (Sweden)

    Roberto Bruschi

    2016-01-01

    Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of an individual cost function. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workflows, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.
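
    Reading the abstract, the node-side AR objective amounts to minimizing an energy-delay product; one concrete form (our illustration, assuming an M/M/1-style delay and a polynomial power curve, neither of which is stated in the abstract) is:

      \min_{\mu > \lambda} \; \Phi(\mu) = E(\mu)\, D(\mu),
      \qquad E(\mu) \propto \mu^{\alpha} \;(\alpha > 1),
      \qquad D(\mu) = \frac{1}{\mu - \lambda}

    where \mu is the node's processing capacity and \lambda its incoming workload.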

  17. The emergence of internet-based virtual private networks in international safeguards

    International Nuclear Information System (INIS)

    Smartt, Heidi Anne

    2001-01-01

    Full text: The costs associated with secure data transmission can be an obstacle to International Safeguards. Typical communication methods are priced by distance and may include telephone lines, frame relay, and ISDN. It is therefore costly to communicate globally. The growth of the Internet has provided an extensive backbone for global communications; however, the Internet does not provide intrinsic security measures. Combining the Internet with Virtual Private Network technology, which encrypts and authenticates data, creates a secure and potentially cost-effective data transmission path, as well as achieving other benefits such as reliability and scalability. Access to the Internet can be achieved by connecting to a local Internet Service Provider, which can be preferable to installing a static link between two distant points. The cost-effectiveness of the Internet-based Virtual Private Network is dependent on such factors as data amount, current operational costs, and the specifics of the Internet connection, such as user proximity to an Internet Service Provider or existing access to the Internet. This paper will introduce Virtual Private Network technology, the benefits of Internet communication, and the emergence of Internet-based Virtual Private Networks throughout the International Safeguards community. Specific projects to be discussed include: The completed demonstration of secure remote monitoring data transfer via the Internet between STUK in Helsinki, Finland, and the IAEA in Vienna, Austria; The demonstration of secure remote access to IAEA resources by traveling inspectors with Virtual Private Network software loaded on laptops; The proposed Action Sheets between ABACC/DOE and ARN/DOE, which will provide a link between Rio de Janeiro and Buenos Aires; The proposed use at the HIFAR research reactor, located in Australia, to provide remote monitoring data to the IAEA; The use of Virtual Private Networks by JRC, Ispra, Italy. (author)

  18. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi-Layer Perceptrons via the Back-Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different numbers of processors, for two network examples. Sample source code is given.
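
    For reference, the serial core of the algorithm such a library parallelizes looks like this (a textbook one-hidden-layer backpropagation sketch in NumPy, unrelated to the Quadrics-specific code):

      # Minimal serial backpropagation for a one-hidden-layer MLP (illustrative only).
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(64, 4))                   # toy inputs
      y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy targets
      W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      for _ in range(500):
          h = sigmoid(X @ W1)                        # forward pass
          out = sigmoid(h @ W2)
          d_out = (out - y) * out * (1 - out)        # output-layer delta
          d_h = (d_out @ W2.T) * h * (1 - h)         # hidden-layer delta
          W2 -= 0.1 * h.T @ d_out                    # gradient steps
          W1 -= 0.1 * X.T @ d_h
      print("final loss:", float(((out - y) ** 2).mean()))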

  19. Virtual View Image over Wireless Visual Sensor Network

    Directory of Open Access Journals (Sweden)

    Gamantyo Hendrantoro

    2011-12-01

    In general, visual sensors are applied to build virtual view images. As the number of visual sensors increases, the quantity and quality of the information improve. However, virtual view image generation is a challenging task in a Wireless Visual Sensor Network environment due to energy restrictions, computational complexity, and bandwidth limitations. Hence, this paper presents a new method of virtual view image generation from selected cameras in a Wireless Visual Sensor Network. The aim of the paper is to meet bandwidth and energy limitations without reducing information quality. The experimental results showed that this method could minimize the number of transmitted images while retaining sufficient information.

  20. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  1. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  2. An OpenFlow based network virtualization framework for the Cloud

    NARCIS (Netherlands)

    Matias, J.; Jacob, E.; Sanchez, D.; Demchenko, Y.

    2011-01-01

    The Cloud computing paradigm entails a challenging networking scenario. Due to the economy of scale, the Cloud is mainly supported by Data Center infrastructures. Therefore, virtualized environment manageability, seamless migration of virtual machines, inter-domain communication issues and

  3. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days: a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it incorporates the largest node size built from commodity parts (512 processors) and the largest shared-memory environment (2048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.

  4. Physics-Based Virtual Fly-Outs of Projectiles on Supercomputers

    National Research Council Canada - National Science Library

    Sahu, Jubaraj

    2006-01-01

    ...) have been successfully fully coupled on high performance computing (HPC) platforms for Virtual Fly-Outs of guided munitions identical to actual free flight tests in the aerodynamic experimental facilities...

  5. Enabling Research Network Connectivity to Clouds with Virtual Router Technology

    Science.gov (United States)

    Seuster, R.; Casteels, K.; Leavett-Brown, CR; Paterson, M.; Sobie, RJ

    2017-10-01

    The use of opportunistic cloud resources by HEP experiments has significantly increased over the past few years. Clouds that are owned or managed by the HEP community are connected to the LHCONE network or the research network with global access to HEP computing resources. Private clouds, such as those supported by non-HEP research funds, are generally connected to the international research network; however, commercial clouds are either not connected to the research network or only connect to research sites within their national boundaries. Since research network connectivity is a requirement for HEP applications, we need to find a solution that provides a high-speed connection. We are studying a solution with a virtual router that addresses the use case where a commercial cloud has research network connectivity only in a limited region. In this situation, we host a virtual router at our HEP site and require that all traffic from the commercial site transit through the virtual router. Although this may lengthen the network path and also increase the load on the HEP site, it is a workable solution that would enable the use of the remote cloud for low-I/O applications. We are exploring some simple open-source solutions. In this paper, we present the results of our studies and how they will benefit our use of private and public clouds for HEP computing.

  6. A Robust Optimization Based Energy-Aware Virtual Network Function Placement Proposal for Small Cell 5G Networks with Mobile Edge Computing Capabilities

    OpenAIRE

    Blanco, Bego; Taboada, Ianire; Fajardo, Jose Oscar; Liberal, Fidel

    2017-01-01

    In the context of cloud-enabled 5G radio access networks with network function virtualization capabilities, we focus on the virtual network function placement problem for a multitenant cluster of small cells that provide mobile edge computing services. Under an emerging distributed network architecture and hardware infrastructure, we employ cloud-enabled small cells that integrate microservers for virtualization execution, equipped with additional hardware appliances. We develop an energy-awa...

  7. Assuring virtual network function image integrity and host sealing in telco cloud

    NARCIS (Netherlands)

    Lal, S.; Ravidas, S.; Oliver, I.; Taleb, T.

    In Telco cloud environments, virtual network functions (VNFs) can be shipped in the form of virtual machine images and hosted over commodity hardware. It is likely that these VNF images will contain highly sensitive data and mission critical network operations. For this reason, these VNF images are

  8. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  9. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  10. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128³-grid-point simulations, this data being used to inform the world's largest three-dimensional time-dependent simulation, with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology, the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  11. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues of architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environments, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping of parallel algorithms, operating system functions, application libraries, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potential of optical and neural technologies for developing future supercomputers.

  12. Libraries' Place in Virtual Social Networks

    Science.gov (United States)

    Mathews, Brian S.

    2007-01-01

    Do libraries belong in the virtual world of social networking? With more than 100 million users, this environment is impossible to ignore. A rising philosophy for libraries, particularly in blog-land, involves the concept of being where the users are. Simply using new media to deliver an old message is not progress. Instead, librarians should…

  13. Virtualized cognitive network architecture for 5G cellular networks

    KAUST Repository

    Elsawy, Hesham

    2015-07-17

    Cellular networks have preserved an application agnostic and base station (BS) centric architecture for decades. Network functionalities (e.g. user association) are decided and performed regardless of the underlying application (e.g. automation, tactile Internet, online gaming, multimedia). Such an ossified architecture imposes several hurdles against achieving the ambitious metrics of next generation cellular systems. This article first highlights the features and drawbacks of such architectural ossification. Then the article proposes a virtualized and cognitive network architecture, wherein network functionalities are implemented via software instances in the cloud, and the underlying architecture can adapt to the application of interest as well as to changes in channels and traffic conditions. The adaptation is done in terms of the network topology by manipulating connectivities and steering traffic via different paths, so as to attain the applications' requirements and network design objectives. The article presents cognitive strategies to implement some of the classical network functionalities, along with their related implementation challenges. The article further presents a case study illustrating the performance improvement of the proposed architecture as compared to conventional cellular networks, both in terms of outage probability and handover rate.

  14. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The rapid computation of supercomputers is based on their vector computation capability. Over the past six years, the authors investigated the adaptability of about 40 typical atomic energy codes to vector computation. Based on the results, the adaptability of supercomputers' vector computation capability to atomic energy codes, issues regarding its utilization, and future prospects are explained. The adaptability of individual calculation codes to vector computation is largely dependent on the algorithm and program structure used for the codes. The speedup achieved by pipeline vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety, and nuclear fusion are reported. The speedup for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
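
    The kind of speedup reported here can be reproduced in miniature by comparing a scalar loop with its vectorized equivalent (a NumPy illustration; the 1.5 to 9.0 figures above refer to the vectorized atomic energy codes, not to this toy):

      # Scalar-vs-vector gap: the same stencil update written two ways.
      import numpy as np, time

      a = np.random.rand(1_000_000)
      t0 = time.perf_counter()
      b = np.empty_like(a)
      for i in range(1, len(a) - 1):               # scalar-style loop
          b[i] = 0.5 * (a[i - 1] + a[i + 1])
      t_loop = time.perf_counter() - t0

      t0 = time.perf_counter()
      c = np.empty_like(a)
      c[1:-1] = 0.5 * (a[:-2] + a[2:])             # vectorized form
      t_vec = time.perf_counter() - t0
      print(f"speedup ~ {t_loop / t_vec:.0f}x")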

  15. Ecological network analysis on global virtual water trade.

    Science.gov (United States)

    Yang, Zhifeng; Mao, Xufeng; Zhao, Xu; Chen, Bin

    2012-02-07

    Global water interdependencies are likely to increase with growing virtual water trade. To address the indirect effects of water trade through the global economic circulation, we use ecological network analysis (ENA) to shed light on the complicated system interactions. A global model of virtual water flow among agriculture and livestock production trade in 1995-1999 is also built as the basis for network analysis. Control analysis is used to identify the quantitative control or dependency relations. The utility analysis provides more indicators for describing the mutual relationship between two regions/countries by imitating the interactions in an ecosystem, and distinguishes the beneficiaries and the contributors of the virtual water trade system. Results show that control and utility relations can depict the mutual relations in the trade system well, and that directly observable relations differ from integral ones once indirect interactions are considered. This paper offers a new way to depict the interrelations between trade components and can serve as a meaningful start as we continue to use ENA to provide more valuable implications for freshwater studies on a global scale.
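
    A hedged sketch of the utility-analysis step, following the standard ENA formulation (direct utilities d_ij = (f_ij - f_ji)/T_i, integral utilities U = (I - D)^-1); the flow matrix below is invented, not the paper's data:

      # ENA utility analysis on a toy 3-node virtual water flow matrix.
      import numpy as np

      F = np.array([[0, 40, 5],        # F[i, j]: virtual water flow i -> j
                    [10, 0, 25],
                    [2, 8, 0]], dtype=float)
      T = F.sum(axis=1)                 # node throughflows (row sums here)
      D = (F - F.T) / T[:, None]        # direct utility matrix
      U = np.linalg.inv(np.eye(3) - D)  # integral utility, sum of powers of D
      print(np.sign(U))                 # +1 beneficiary / -1 contributor per pair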

  16. A First Step Towards Network Security Virtualization: From Concept to Prototype

    Science.gov (United States)

    2015-10-01

    software-defined networking (SDN) technology to virtualize network security functions. At its core... network device. Some recent technologies suggest a method to control network flows dynamically at a network device, e.g., Software-Defined Networking (SDN)... Software-Defined Networking (SDN) technology and its most popular realization, OpenFlow [17], [24]. More specifically, we will use SDN

  17. Framework and implications of virtual neurorobotics

    Directory of Open Access Journals (Sweden)

    2008-07-01

    Despite decades of societal investment in artificial learning systems, truly “intelligent” systems have yet to be realized. These traditional models are based on input-output pattern optimization and/or cognitive production rule modeling. One response has been social robotics, using the interaction of human and robot to capture important cognitive dynamics such as cooperation and emotion; to date, these systems still incorporate traditional learning algorithms. More recently, investigators are focusing on the core assumptions of the brain “algorithm” itself—trying to replicate uniquely “neuromorphic” dynamics such as action potential spiking and synaptic learning. Only now are large-scale neuromorphic models becoming feasible, due to the availability of powerful supercomputers and an expanding supply of parameters derived from research into the brain’s interdependent electrophysiological, metabolomic and genomic networks. Personal computer technology has also led to the acceptance of computer-generated humanoid images, or “avatars”, to represent intelligent actors in virtual realities. In a recent paper, we proposed a method of virtual neurorobotics (VNR) in which the approaches above (social-emotional robotics, neuromorphic brain architectures, and virtual reality projection) are hybridized to rapidly forward-engineer and develop increasingly complex, intrinsically intelligent systems. In this paper, we synthesize our research and related work in the field and provide a framework for VNR, with wider implications for research and practical applications.

  18. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary, our update contains: (1) Perfect speedup, sustaining 19.3% of peak for the Wilson D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter is perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16) while the total lattice is one that has long been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times larger than the speed we quoted in our submission. As we have pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio which cannot be overlapped on BG/L in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and fulfill a 30-year-long dream for lattice QCD.
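
    The two global sums mentioned above are the dot products inside the CG iteration, as the textbook form of the solver makes explicit (a serial NumPy sketch, not the BG/L QCD code):

      # Textbook conjugate gradient; each iteration needs two global reductions
      # (the dot products), which is what stresses the machine-wide network.
      import numpy as np

      def cg(A, b, tol=1e-10, max_iter=500):
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs = r @ r                      # global sum #1
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)       # global sum #2
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(cg(A, b))   # ~ [0.0909, 0.6364]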

  19. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Anisenkov, A; Belov, S; Kaplin, V; Korol, A; Skovpen, K; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2012-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, we currently have several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects of the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  20. Architecture and design of optical path networks utilizing waveband virtual links

    Science.gov (United States)

    Ito, Yusaku; Mori, Yojiro; Hasegawa, Hiroshi; Sato, Ken-ichi

    2016-02-01

    We propose a novel optical network architecture that uses waveband virtual links, each of which can carry several optical paths, to directly bridge distant node pairs. Future photonic networks should not only transparently cover extended areas but also expand fiber capacity. However, the traversal of many ROADM nodes impairs the optical signal due to spectrum narrowing. To suppress the degradation, the bandwidth of guard bands needs to be increased, which degrades fiber frequency utilization. Waveband granular switching allows us to apply broader pass-band filtering at ROADMs and to insert sufficient guard bands between wavebands with minimum frequency utilization offset. The scheme resolves the severe spectrum narrowing effect. Moreover, the guard band between optical channels in a waveband can be minimized, which increases the number of paths that can be accommodated per fiber. In the network, wavelength path granular routing is done without utilizing waveband virtual links, and it still suffers from spectrum narrowing. A novel network design algorithm that can bound the spectrum narrowing effect by limiting the number of hops (traversed nodes that need wavelength path level routing) is proposed in this paper. This algorithm dynamically changes the waveband virtual link configuration according to the traffic distribution variation, where optical paths that need many node hops are effectively carried by virtual links. Numerical experiments demonstrate that the number of necessary fibers is reduced by 23% compared with conventional optical path networks.

  1. To trade or not to trade: Link prediction in the virtual water network

    Science.gov (United States)

    Tuninetti, Marta; Tamea, Stefania; Laio, Francesco; Ridolfi, Luca

    2017-12-01

    In the international trade network, links express the (temporary) presence of a commercial exchange of goods between any two countries. Given the dynamical behaviour of the trade network, where links are created and dismissed every year, predicting link activation/deactivation is an open research question. Through the international trade network of agricultural goods, water resources are 'virtually' transferred from the country of production to the country of consumption. We propose a novel methodology for link prediction applied to the network of virtual water trade. Starting from the assumption of having links between any two countries, we estimate the associated virtual water flows by means of a gravity-law model using country and link characteristics as drivers. We consider the links with estimated flows higher than 1000 m3/year as active links, and the others as non-active links. Flows traded along estimated active links are then re-estimated using a similar but differently-calibrated gravity-law model. We were able to correctly model 84% of the existing links and 93% of the non-existing links in year 2011. It is worth noting that the predicted active links carry 99% of the global virtual water flow; hence, missed links are mainly those where a minimal volume of virtual water is exchanged. Results indicate that, over the period from 1986 to 2011, population, geographical distance between countries, and agricultural efficiency (through fertilizer use) are the major factors driving link activation and deactivation. As opposed to other (network-based) models for link prediction, the proposed method is able to reconstruct the network architecture without any prior knowledge of the network topology, using only node and link attributes; it thus represents a general method that can be applied to other networks such as food or value trade networks.
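
    A minimal sketch of the activation rule (our illustration; the gravity coefficients, populations, and distances are invented, and only the 1000 m3/year threshold comes from the abstract; the second-stage recalibrated model is not shown):

      # Gravity-law flow estimate followed by the active-link threshold test.
      countries = {"A": 60.0, "B": 9.0, "C": 200.0}   # populations, millions (invented)
      dist = {("A", "B"): 1200.0, ("A", "C"): 8000.0, ("B", "C"): 9000.0}  # km (invented)
      k, a, b, c = 5e3, 0.8, 0.7, 1.5   # invented gravity coefficients
      THRESHOLD = 1000.0                # m3/year active-link cutoff from the abstract

      for (i, j), d in dist.items():
          flow = k * (countries[i] ** a) * (countries[j] ** b) / (d ** c)
          status = "ACTIVE" if flow > THRESHOLD else "inactive"
          print(f"{i}->{j}: {flow:,.0f} m3/yr ({status})")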

  2. Virtual private network design: a proof of the tree routing conjecture on ring networks

    NARCIS (Netherlands)

    C.A.J. Hurkens (Cor); J.C.M. Keijsper; L. Stougie (Leen)

    2005-01-01

    A basic question in Virtual Private Network (VPN) design is if the symmetric version of the problem always has an optimal solution which is a tree network. An affirmative answer would imply that the symmetric VPN problem is solvable in polynomial time. We give an affirmative answer in

  3. Traffic routing for multicomputer networks with virtual cut-through capability

    Science.gov (United States)

    Kandlur, Dilip D.; Shin, Kang G.

    1992-01-01

    Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.

  4. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    Science.gov (United States)

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  5. Virtual private network design : a proof of the tree routing conjecture on ring networks

    NARCIS (Netherlands)

    Hurkens, C.A.J.; Keijsper, J.C.M.; Stougie, L.

    2007-01-01

    A basic question in virtual private network (VPN) design is if the symmetric version of the problem always has an optimal solution which is a tree network. An affirmative answer would imply that the symmetric VPN problem is solvable in polynomial time. We give an affirmative answer in case the

  6. The Design and Analysis of Virtual Network Configuration for a Wireless Mobile ATM Network

    OpenAIRE

    Bush, Stephen F.

    1999-01-01

    This research concentrates on the design and analysis of an algorithm referred to as Virtual Network Configuration (VNC) which uses predicted future states of a system for faster network configuration and management. VNC is applied to the configuration of a wireless mobile ATM network. VNC is built on techniques from parallel discrete event simulation merged with constraints from real-time systems and applied to mobile ATM configuration and handoff. Configuration in a mobile network is a dyna...

  7. Name-Based Address Mapping for Virtual Private Networks

    Science.gov (United States)

    Surányi, Péter; Shinjo, Yasushi; Kato, Kazuhiko

    IPv4 private addresses are commonly used in local area networks (LANs). With the increasing popularity of virtual private networks (VPNs), it has become common that a user connects to multiple LANs at the same time. However, private address ranges for LANs frequently overlap. In such cases, existing systems do not allow the user to access the resources on all LANs at the same time. In this paper, we propose name-based address mapping for VPNs, a novel method that allows connecting to hosts through multiple VPNs at the same time, even when the address ranges of the VPNs overlap. In name-based address mapping, rather than using the IP addresses used on the LANs (the real addresses), we assign a unique virtual address to each remote host based on its domain name. The local host uses the virtual addresses to communicate with remote hosts. We have implemented name-based address mapping for layer 3 OpenVPN connections on Linux and measured its performance. The communication overhead of our system is less than 1.5% for throughput and less than 0.2ms for each name resolution.
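
    The core idea, mapping each domain name deterministically to a unique virtual address, can be sketched as follows (our illustration, not the authors' OpenVPN-based implementation; the virtual range and hash truncation are assumptions, and collisions would need handling in practice):

      # Deterministic name-to-virtual-address mapping.
      import hashlib, ipaddress

      VIRTUAL_NET = ipaddress.ip_network("10.128.0.0/9")  # assumed virtual range

      def virtual_address(fqdn: str) -> ipaddress.IPv4Address:
          h = int.from_bytes(hashlib.sha256(fqdn.lower().encode()).digest()[:4], "big")
          return VIRTUAL_NET[h % VIRTUAL_NET.num_addresses]

      print(virtual_address("fileserver.lab-a.example"))
      print(virtual_address("fileserver.lab-b.example"))  # distinct even if real IPs overlap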

  8. A Robust Optimization Based Energy-Aware Virtual Network Function Placement Proposal for Small Cell 5G Networks with Mobile Edge Computing Capabilities

    Directory of Open Access Journals (Sweden)

    Bego Blanco

    2017-01-01

    In the context of cloud-enabled 5G radio access networks with network function virtualization capabilities, we focus on the virtual network function placement problem for a multitenant cluster of small cells that provide mobile edge computing services. Under an emerging distributed network architecture and hardware infrastructure, we employ cloud-enabled small cells that integrate microservers for virtualization execution, equipped with additional hardware appliances. We develop an energy-aware placement solution using a robust optimization approach based on service demand uncertainty in order to minimize the power consumption in the system constrained by network service latency requirements and infrastructure terms. Then, we discuss the results of the proposed placement mechanism in 5G scenarios that combine several service flavours and robust protection values. Once the impact of the service flavour and robust protection on the global power consumption of the system is analyzed, numerical results indicate that our proposal succeeds in efficiently placing the virtual network functions that compose the network services in the available hardware infrastructure while fulfilling service constraints.

  9. Triadic motifs in the dependence networks of virtual societies

    Science.gov (United States)

    Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2014-06-01

    In friendship networks, individuals have different numbers of friends, and the closeness or intimacy between an individual and her friends is heterogeneous. Using a statistical filtering method to identify relationships about who depends on whom, we construct dependence networks (which are directed) from weighted friendship networks of avatars in more than two hundred virtual societies of a massively multiplayer online role-playing game (MMORPG). We investigate the evolution of triadic motifs in dependence networks. Several metrics show that the virtual societies evolved through a transient stage in the first two to three weeks and reached a relatively stable stage. We find that the unidirectional loop motif (M9) is underrepresented and does not appear, open motifs are also underrepresented, while other close motifs are overrepresented. We also find that, for most motifs, the overall level difference of the three avatars in the same motif is significantly lower than average, whereas the sum of ranks is only slightly larger than average. Our findings show that avatars' social status plays an important role in the formation of triadic motifs.

  10. Triadic motifs in the dependence networks of virtual societies.

    Science.gov (United States)

    Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2014-06-10

    In friendship networks, individuals have different numbers of friends, and the closeness or intimacy between an individual and her friends is heterogeneous. Using a statistical filtering method to identify relationships about who depends on whom, we construct dependence networks (which are directed) from weighted friendship networks of avatars in more than two hundred virtual societies of a massively multiplayer online role-playing game (MMORPG). We investigate the evolution of triadic motifs in dependence networks. Several metrics show that the virtual societies evolved through a transient stage in the first two to three weeks and reached a relatively stable stage. We find that the unidirectional loop motif (M9) is underrepresented and does not appear, open motifs are also underrepresented, while other close motifs are overrepresented. We also find that, for most motifs, the overall level difference of the three avatars in the same motif is significantly lower than average, whereas the sum of ranks is only slightly larger than average. Our findings show that avatars' social status plays an important role in the formation of triadic motifs.
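
    Triadic motif counting on a directed network can be reproduced with NetworkX's triad census (a generic sketch, not the authors' pipeline; note their M1-M13 motif labels do not map one-to-one onto NetworkX's 16 triad type codes):

      # Census of directed triads on a random digraph.
      import networkx as nx

      G = nx.gnp_random_graph(200, 0.03, seed=42, directed=True)
      census = nx.triadic_census(G)
      for code, count in sorted(census.items()):
          if count:
              print(code, count)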

  11. Cyber-Physical System Security With Deceptive Virtual Hosts for Industrial Control Networks

    International Nuclear Information System (INIS)

    Vollmer, Todd; Manic, Milos

    2014-01-01

    A challenge facing industrial control network administrators is protecting the typically large number of connected assets for which they are responsible. These cyber devices may be tightly coupled with the physical processes they control and human induced failures risk dire real-world consequences. Dynamic virtual honeypots are effective tools for observing and attracting network intruder activity. This paper presents a design and implementation for self-configuring honeypots that passively examine control system network traffic and actively adapt to the observed environment. In contrast to prior work in the field, six tools were analyzed for suitability of network entity information gathering. Ettercap, an established network security tool not commonly used in this capacity, outperformed the other tools and was chosen for implementation. Utilizing Ettercap XML output, a novel four-step algorithm was developed for autonomous creation and update of a Honeyd configuration. This algorithm was tested on an existing small campus grid and sensor network by execution of a collaborative usage scenario. Automatically created virtual hosts were deployed in concert with an anomaly behavior (AB) system in an attack scenario. Virtual hosts were automatically configured with unique emulated network stack behaviors for 92% of the targeted devices. The AB system alerted on 100% of the monitored emulated devices
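
    The final step, emitting a Honeyd configuration from a scanned host inventory, might look like the sketch below; the XML schema here is a simplified stand-in, not Ettercap's actual output format, while the emitted directives follow standard Honeyd configuration syntax:

      # Turn a (simplified, hypothetical) host-inventory XML into Honeyd directives.
      import xml.etree.ElementTree as ET

      SAMPLE = """<hosts>
        <host ip="10.0.0.5"><os>Linux 2.4.20</os><port proto="tcp" num="80"/></host>
      </hosts>"""

      for i, host in enumerate(ET.fromstring(SAMPLE)):
          name = f"decoy{i}"
          print(f"create {name}")
          print(f'set {name} personality "{host.find("os").text}"')
          for p in host.findall("port"):
              print(f'add {name} {p.get("proto")} port {p.get("num")} open')
          print(f'bind {host.get("ip")} {name}')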

  12. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  13. Monitoring and Discovery for Self-Organized Network Management in Virtualized and Software Defined Networks

    Science.gov (United States)

    Valdivieso Caraguay, Ángel Leonardo; García Villalba, Luis Javier

    2017-01-01

    This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks (SELFNET) project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors. PMID:28362346

  14. Monitoring and Discovery for Self-Organized Network Management in Virtualized and Software Defined Networks.

    Science.gov (United States)

    Caraguay, Ángel Leonardo Valdivieso; Villalba, Luis Javier García

    2017-03-31

    This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks (SELFNET) project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors.

  15. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  16. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). Such leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  17. Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.

    Science.gov (United States)

    Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong

    2018-01-01

    Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). The features of directional antennas and of visual data make WVSNs more complex than conventional Wireless Sensor Networks (WSNs). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. Most of the existing literature focuses on the efficiency brought by the construction of clusters and generally neglects local-balance problems. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measurement called energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, a directional virtual backbone construction scheme is proposed that takes the local-balance factor into account, and the associated network coding mechanism is utilized to construct DVBDAS. Finally, both a theoretical analysis of the proposed DVBDAS and simulations are given to evaluate its performance. The experimental results show that the proposed DVBDAS achieves higher performance than existing methods in terms of both energy preservation and network lifetime extension.

  18. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 physicists and astronomers attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to survey the theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration between numerical experimentalists working on supercomputing techniques. The various subjects of the presented papers, spanning hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity, are all stimulating. In fact, such numerical calculations have now become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  19. Fusion virtual laboratory: The experiments' collaboration platform in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, H., E-mail: nakanisi@nifs.ac.jp [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan); Kojima, M.; Takahashi, C.; Ohsuna, M.; Imazu, S.; Nonomura, M. [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan); Hasegawa, M. [RIAM, Kyushu University, Kasuga, Fukuoka 816-8560 (Japan); Yoshikawa, M. [PRC, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Nagayama, Y.; Kawahata, K. [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan)

    2012-12-15

    'Fusion virtual laboratory (FVL)' is the experiments' collaboration platform covering multiple fusion projects in Japan. Major Japanese fusion laboratories and universities are mutually connected through a dedicated virtual private network, named SNET, on SINET4. It has three categories: (i) LHD remote participation, (ii) bilateral experiments' collaboration, and (iii) remote use of supercomputers. By extending the LABCOM data system developed at LHD, FVL supports (i) and (ii) so that it can deal not only with LHD data but also with the data of two remote experiments: QUEST at Kyushu University and GAMMA10 at the University of Tsukuba. FVL has applied the latest 'cloud' technology to both its data acquisition and storage architectures, which provides high availability and performance scalability for the whole system. With a well-optimized TCP data transfer method, a unified data access platform for both experimental data and numerical computation results becomes realistic on FVL. The FVL project will continue demonstrating ITER-era international collaboration schemes and the necessary technology.

  20. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, slows down scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach analyzes computing resource utilization statistics, which makes it possible to identify typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire machine. Within the third approach, abnormal jobs (jobs whose behavior is abnormally inefficient and differs significantly from the standard behavior of the overall job flow) are detected. For each approach, results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
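
    As a flavor of the third approach, a per-metric z-score rule is already enough to flag jobs that deviate strongly from the overall job flow. The Python sketch below is a generic illustration under assumed metric names and thresholds; the detector actually used at the MSU center is more elaborate.

      import numpy as np

      def abnormal_jobs(metrics, threshold=4.0):
          """metrics: (n_jobs, n_metrics) array, e.g. CPU load, IPC, I/O rate."""
          z = (metrics - metrics.mean(axis=0)) / (metrics.std(axis=0) + 1e-12)
          return np.where(np.abs(z).max(axis=1) > threshold)[0]

      jobs = np.random.default_rng(0).normal(size=(1000, 3))   # synthetic job metrics
      jobs[7] = [9.0, -8.0, 0.0]                               # an obviously deviant job
      print(abnormal_jobs(jobs))                               # flags job 7 (plus any chance outliers)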

  1. Research on Web-Based Networked Virtual Instrument System

    International Nuclear Information System (INIS)

    Tang, B P; Xu, C; He, Q Y; Lu, D

    2006-01-01

    The web-based networked virtual instrument (NVI) system is designed using the object-oriented methodology (OOM). The architecture of the NVI system consists of two major parts: client-web server interaction and instrument server-virtual instrument (VI) communication. The web server communicates with the instrument server and the clients connected to it over the Internet, and it handles identifying user names, managing the connections between users and the instrument server, and adding, removing and configuring VI information. The instrument server handles setting the parameters of a VI, checking the condition of a VI and saving VI condition information to the database. The NVI system is required to be a general-purpose measurement system that is easy to maintain, adapt and extend. Virtual instruments are connected to the instrument server, and clients can remotely configure and operate these virtual instruments. An application of the NVI system is given at the end of the paper.

  2. Analysis of the social network development of a virtual community for Australian intensive care professionals.

    Science.gov (United States)

    Rolls, Kaye Denise; Hansen, Margaret; Jackson, Debra; Elliott, Doug

    2014-11-01

    Social media platforms can create virtual communities, enabling healthcare professionals to network with a broad range of colleagues and facilitate knowledge exchange. In 2003, an Australian state health department established an intensive care mailing list to address the professional isolation experienced by senior intensive care nurses. This article describes the social network created within this virtual community by examining how the membership profile evolved from 2003 to 2009. A retrospective descriptive design was used. The data source was a deidentified member database. Since 2003, 1340 healthcare professionals subscribed to the virtual community with 78% of these (n = 1042) still members at the end of 2009. The membership profile has evolved from a single-state nurse-specific network to an Australia-wide multidisciplinary and multiorganizational intensive care network. The uptake and retention of membership by intensive care clinicians indicated that they appeared to value involvement in this virtual community. For healthcare organizations, a virtual community may be a communications option for minimizing professional and organizational barriers and promoting knowledge flow. Further research is, however, required to demonstrate a link between these broader social networks, enabling the exchange of knowledge and improved patient outcomes.

  3. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using this metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...

  4. Big data analytics for the virtual network topology reconfiguration use case

    OpenAIRE

    Gifre Renom, Lluís; Morales Alcaide, Fernando; Velasco Esteban, Luis Domingo; Ruiz Ramírez, Marc

    2016-01-01

    ABNO's OAM Handler is extended with big data analytics capabilities to anticipate traffic changes in volume and direction. Predicted traffic is used to trigger virtual network topology re-optimization. When the virtual topology needs to be reconfigured, predicted and current traffic matrices are used to find the optimal topology. A heuristic algorithm to adapt current virtual topology to meet both actual demands and expected traffic matrix is proposed. Experimental assessment is carried ou...

  5. Monitoring and Discovery for Self-Organized Network Management in Virtualized and Software Defined Networks

    Directory of Open Access Journals (Sweden)

    Ángel Leonardo Valdivieso Caraguay

    2017-03-01

    Full Text Available This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks (SELFNET) project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors.

  6. Interpersonal Influence in Virtual Social Networks and Consumer Decisions

    Directory of Open Access Journals (Sweden)

    Eduardo Botti Abbade

    2014-04-01

    Full Text Available This study analyzes the attitude of college students toward interpersonal influence in virtual social networks as it relates to consumption decisions. A survey was conducted with 200 college students from an institution of higher education located in Santa Maria/RS. The sample was obtained through voluntary adhesion, and the data collection instrument was applied in a virtual environment. Scales were adapted to measure the propensity of students to influence, and to be influenced by, their virtual contacts. The results suggest that the adapted scales satisfactorily measure what they are intended to measure. The study also found that men are more able to influence the opinions of their virtual social contacts. On the other hand, the time dedicated to Internet access positively and significantly influences the propensity of users to be influenced by their virtual social contacts. The correlation between the ability to influence and the propensity to be influenced is significant and positive.

  7. Dynamic virtual optical network embedding in spectral and spatial domains over elastic optical networks with multicore fibers

    Science.gov (United States)

    Zhu, Ruijie; Zhao, Yongli; Yang, Hui; Tan, Yuanlong; Chen, Haoran; Zhang, Jie; Jue, Jason P.

    2016-08-01

    Network virtualization can eradicate the ossification of the infrastructure and stimulate innovation in new network architectures and applications. Elastic optical networks (EONs) are ideal substrate networks for provisioning flexible virtual optical network (VON) services. However, as network traffic continues to increase exponentially, the capacity of EONs will soon reach its physical limit. To further increase network flexibility and capacity, the concept of EONs has been extended into the spatial domain. How to map VONs onto substrate networks while fully using both spectral and spatial resources is therefore extremely important; this process is called VON embedding (VONE). Considering the two kinds of resources jointly during the embedding process, we propose two VONE algorithms, the adjacent link embedding algorithm (ALEA) and the remote link embedding algorithm (RLEA). First, we introduce a model for the VONE problem. Then we design a measure of the embedding ability of network elements, on which the two VONE algorithms are based. Simulation results show that the proposed VONE algorithms achieve better performance than the baseline algorithm in terms of blocking probability and revenue-to-cost ratio.

  8. The Cure for HPC Neurosis: Multiple, Virtual Personalities!

    Energy Technology Data Exchange (ETDEWEB)

    Farber, Rob

    2007-06-30

    The selection of a new supercomputer for a scientific data center represents an interesting neurotic condition stemming from the conflict between a compulsion to acquire the best of the latest generation computer hardware, and unresolved issues as users seek validation from legacy scientific software - sometimes euphemistically called "research quality code". Virtualization technology, now a mainstream feature on modern processors, permits multiple operating systems to efficiently and simultaneously run on each node of a supercomputer (or even your laptop and workstation). The benefits of this technology are many, ranging from supporting legacy software to paving the way towards robust petascale (10^15 floating-point operations per second) and eventually exascale (10^18 floating-point operations per second) computing.

  9. Cloud-Based Virtual Laboratory for Network Security Education

    Science.gov (United States)

    Xu, Le; Huang, Dijiang; Tsai, Wei-Tek

    2014-01-01

    Hands-on experiments are essential for computer network security education. Existing laboratory solutions usually require significant effort to build, configure, and maintain and often do not support reconfigurability, flexibility, and scalability. This paper presents a cloud-based virtual laboratory education platform called V-Lab that provides a…

  10. Investigating the effects of virtual social networks on entrepreneurial marketing

    Directory of Open Access Journals (Sweden)

    Kambeiz Talebi

    2014-10-01

    Full Text Available This paper presents an empirical investigation of the effects of virtual social networks on entrepreneurial marketing. The study designs a Likert-scale questionnaire based on a model originally developed by Morris et al. (2002) [Morris, M. H., Schindehutte, M., & LaForge, R. W. (2002). Entrepreneurial marketing: a construct for integrating emerging entrepreneurship and marketing perspectives. Journal of Marketing Theory and Practice, 10(4), 1-19.]. The study considers the effects of three components of virtual social networks (VSNs), namely structural VSN, interaction VSN and functional VSN, on entrepreneurial marketing. Using structural equation modeling, the study finds positive and meaningful effects of all three VSN components on entrepreneurial marketing.

  11. Microwork and Virtual Production Networks in Sub-Saharan Africa ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Microwork and Virtual Production Networks in Sub-Saharan Africa and Southeast Asia ... content posted to social media sites; categorizing products in online shops; or ... that have realized that entry-level workers can be efficient and effective.

  12. First field trial of Virtual Network Operator oriented network on demand (NoD) service provisioning over software defined multi-vendor OTN networks

    Science.gov (United States)

    Li, Yajie; Zhao, Yongli; Zhang, Jie; Yu, Xiaosong; Chen, Haoran; Zhu, Ruijie; Zhou, Quanwei; Yu, Chenbei; Cui, Rui

    2017-01-01

    A Virtual Network Operator (VNO) is a provider and reseller of network services from other telecommunications suppliers. These network providers are categorized as virtual because they do not own the underlying telecommunication infrastructure. In terms of business operation, a VNO can provide customers with personalized services by leasing network infrastructure from traditional network providers. This unique business mode has led to the emergence of network on demand (NoD) services. Conventional network provisioning involves a series of manual operations and configurations, which makes it very time-consuming. Considering the advantages of Software Defined Networking (SDN), this paper proposes a novel NoD service provisioning solution to satisfy the private network needs of VNOs. The solution is first verified on a real software-defined multi-domain optical network with multi-vendor OTN equipment. With the proposed solution, NoD services can be deployed via online web portals in near-real time. It reinvents the customer experience and redefines how network services are delivered to customers via an online self-service portal. Ultimately, this means a customer will be able to simply go online, click a few buttons and have new services almost instantaneously.

  13. The Effect of Social Network Diagrams on a Virtual Network of Practice: A Korean Case

    Science.gov (United States)

    Jo, Il-Hyun

    2009-01-01

    This study investigates the effect of the presentation of social network diagrams on virtual team members' interaction behavior via e-mail. E-mail transaction data from 22 software developers in a Korean IT company was analyzed and depicted as diagrams by social network analysis (SNA), and presented to the members as an intervention. Results…

  14. Vulnerability of countries to food-production crises propagating in the virtual water trade network

    Science.gov (United States)

    Tamea, S.; Laio, F.; Ridolfi, L.

    2015-12-01

    In recent years, the international trade of food and agricultural commodities has undergone a marked increase in exchanged volumes and an expansion of the trade network. This globalization of trade has both positive and negative effects, but the interconnectedness and external dependency of countries generate complex dynamics which are often difficult to understand and model. In this study we consider the volume of water used for the production of agricultural commodities, virtually exchanged among countries through commodity trade, i.e. the virtual water trade. We then set up a parsimonious mechanistic model describing the propagation, through the global trade network, of food-production crises generated locally by a social, economic or environmental event (such as war, economic crisis, drought, or pest). The model, accounting for the network structure and the virtual water balance of all countries, is based on rules derived from observed virtual water flows and on data-based, statistically verified assumptions. It is also tested on real case studies, which prove its capability to capture the main features of crisis propagation. The model is then employed as the basis for an index of country vulnerability, measuring the exposure of countries to crises propagating in the virtual water trade network. Results of the analysis are discussed within the context of the socio-economic and environmental conditions of countries, showing that not only water-scarce but also wealthy and globalized countries are among the most vulnerable to external crises. The temporal analysis for the period 1986-2011 reveals that the global average vulnerability has strongly increased over time, confirming the increased exposure of countries to crises occurring in the virtual water trade network.
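
    To make the propagation mechanism concrete, the Python sketch below implements one plausible reading of such a model (not the authors' actual formulation): each country absorbs a shortfall up to a local buffer and redirects the residual to its suppliers in proportion to the virtual water volumes it imports from them. The toy flow matrix and buffer values are illustrative assumptions.

      import numpy as np

      def propagate(T, deficit, absorb, n_steps=50):
          """T[i, j]: virtual water flow from exporter i to importer j;
          deficit: initial shortfall per country; absorb: local buffer capacity."""
          shortfall = deficit.astype(float).copy()
          for _ in range(n_steps):
              passed = np.maximum(shortfall - absorb, 0.0)   # what cannot be absorbed locally
              shortfall = np.minimum(shortfall, absorb)
              imports = T.sum(axis=0)                        # total inflow per country
              shares = np.divide(T, imports, where=imports > 0, out=np.zeros_like(T))
              shortfall += shares @ passed                   # residual demand moves upstream
              if passed.sum() < 1e-9:
                  break
          return shortfall

      T = np.array([[0, 5, 2], [3, 0, 4], [1, 2, 0]], dtype=float)   # toy flow matrix
      print(propagate(T, deficit=np.array([0.0, 0, 3]), absorb=np.array([1.0, 1, 1])))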

  15. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing at over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of over 0.3 petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, the LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads

  16. ENHANCED PROVISIONING ALGORITHM FOR VIRTUAL PRIVATE NETWORK IN HOSE MODEL WITH QUALITY OF SERVICE SUPPORT USING WAXMAN MODEL

    Directory of Open Access Journals (Sweden)

    R. Ravi

    2011-03-01

    Full Text Available As Internet usage grows exponentially, network security issues become increasingly important. Network security measures are needed to protect data during transmission, and various security controls are used to prevent access by hackers: firewalls, virtual private networks and encryption algorithms. Among these, the virtual private network plays a vital role in preventing hackers from accessing networks. A Virtual Private Network (VPN) provides end users with a way to privately access information on their network over a public network infrastructure such as the Internet. Using a technique called "tunneling", data packets are transmitted across a public routed network, such as the Internet, in a way that simulates a point-to-point connection. Virtual private networks provide customers with a secure and low-cost communication environment. The basic idea of a virtual circuit is to create a logical path from the source port to the destination port; this path may incorporate many hops between routers. The final logical path, or virtual circuit, acts in the same way as a direct connection between the two ports. The K-Cost Optimized Delay Satisfied Virtual Private Network Tree Provisioning (KCDVT) algorithm connects VPN nodes using a tree structure and attempts to optimize the total bandwidth reserved on the edges of the VPN tree while satisfying the delay requirement. It also allows sharing of bandwidth on the links to improve performance. The proposed KCDVT algorithm computes the optimal VPN tree. The performance of the proposed algorithm is analyzed, in terms of cost, number of nodes, number of VPN nodes, delay, asymmetry ratio and delay with constraints, against the Breadth First Search algorithm; KCDVT performs better.

  17. The production route selection algorithm in virtual manufacturing networks

    Science.gov (United States)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2017-08-01

    The increasing requirements and competition in the global market challenge companies' profitability in production and supply chain management. This situation has become the basis for the construction of virtual organizations, which are created in response to temporary needs. The problem of production flow planning in virtual manufacturing networks is considered. In the paper, an algorithm is proposed for selecting a production route, from the set of admissible routes, that meets the technology and resource requirements under a minimum-cost criterion.
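
    The selection step itself reduces to filtering the admissible routes by the technology and resource requirements and taking the cheapest survivor. The Python sketch below illustrates this under an assumed data model; the route names, fields and numbers are made up, and the paper's algorithm is richer than this.

      def select_route(routes, available_resources):
          """routes: list of dicts with 'resources' (set) and 'cost' (float)."""
          feasible = [r for r in routes if r["resources"] <= available_resources]
          if not feasible:
              raise ValueError("no admissible route satisfies the requirements")
          return min(feasible, key=lambda r: r["cost"])

      routes = [
          {"name": "R1", "resources": {"lathe", "mill"}, "cost": 120.0},
          {"name": "R2", "resources": {"lathe", "grinder"}, "cost": 95.0},
      ]
      # R2 is cheaper but needs a grinder, which is unavailable, so R1 wins.
      print(select_route(routes, available_resources={"lathe", "mill", "drill"})["name"])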

  18. REDES SOCIALES VIRTUALES Y LA BIOÉTICA / VIRTUAL SOCIAL NETWORKS AND BIOETHICS

    Directory of Open Access Journals (Sweden)

    Jorge Arturo Pérez Pérez

    2011-06-01

    Full Text Available ABSTRACT: This research inquires into the psychological (obsessive), socio-familial, physiological and emotional conditions of students at the Universidad de San Buenaventura, Medellín, in August 2010, who tend to devote part of their time to virtual social networks, and offers a reflection on the issue from the standpoint of Bioethics. Abstract: This piece of research tends to inquire into the obsessive psychological, socio-family, physiological, and emotional conditions of the students at Saint Bonaventure University, Medellin Branch, back in the month of August 2010, who have the tendency to spend their free time in the Virtual Social Networks, and then makes an analysis of this issue from the Bioethical viewpoint.

  19. vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design

    OpenAIRE

    Rhu, Minsoo; Gimelshein, Natalia; Clemons, Jason; Zulfiqar, Arslan; Keckler, Stephen W.

    2016-01-01

    The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU...

  20. Wireless virtualization

    CERN Document Server

    Wen, Heming; Le-Ngoc, Tho

    2013-01-01

    This SpringerBrief is an overview of the emerging field of wireless access and mobile network virtualization. It provides a clear and relevant picture of the current virtualization trends in wireless technologies by summarizing and comparing different architectures, techniques and technologies applicable to a future virtualized wireless network infrastructure. The readers are exposed to a short walkthrough of the future Internet initiative and network virtualization technologies in order to understand the potential role of wireless virtualization in the broader context of next-generation ubiq

  1. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; however, making the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through an MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
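
    The MPI virtual-topology mechanism that the optimization builds on can be shown briefly. The mpi4py fragment below arranges the ranks in a 3-D Cartesian grid and exchanges a stand-in boundary buffer between x-neighbors, which is the communication pattern an FDTD halo exchange uses; it is a generic illustration, not the paper's optimized topology rules.

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      dims = MPI.Compute_dims(comm.Get_size(), [0, 0, 0])   # factor ranks into a 3-D grid
      cart = comm.Create_cart(dims, periods=[False] * 3, reorder=True)

      x, y, z = cart.Get_coords(cart.Get_rank())   # this rank's grid position
      left, right = cart.Shift(0, 1)               # x-neighbors (MPI.PROC_NULL at edges)
      send, recv = bytearray(8), bytearray(8)      # stand-ins for E/H-field ghost layers
      cart.Sendrecv(send, dest=right, recvbuf=recv, source=left)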

  2. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  3. Controls of the U.S. Virtual Water Transfer Network

    Science.gov (United States)

    Garcia, S.; Mejia, A.

    2017-12-01

    A complex interplay of human and natural factors shapes the economic geography of the U.S., operating through socioeconomic forces that drive the consumption, production, and exchange of commodities. The virtual water content of a commodity represents the water embedded in its production. This work investigates the controls of national bilateral transfers in the virtual water transfer network (VWTN) through a gravity-type spatial interaction model. We use a probabilistic model to predict the binary network and investigate whether the gravity model can explain the topological properties of the empirical weighted network. In general, the gravity model relates transfer flows to the masses of the trading regions and their geographical distance. We hypothesize that properties of the nodes, such as population, employment, and availability of land, together with the Euclidean distance between two trading regions, capture the main drivers of the national VWTN. The results from the model are then compared to the empirical weighted network to verify its ability to reproduce the structure of this self-organized system. The proposed empirical model provides insight into the processes that underlie the formation of the VWTN, and it can be a promising tool to study how flows are affected by changes in the generating conditions due to shocks and/or stresses.
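
    A gravity-type spatial interaction model of this kind is commonly estimated in log-linear form, relating each flow to the masses of the trading regions and their distance. The Python sketch below fits such a model to synthetic data by ordinary least squares; the variable names and numbers are illustrative, not the study's data or estimator.

      import numpy as np

      def fit_gravity(flows, mass_o, mass_d, dist):
          """Fit log F_ij = b0 + b1*log m_i + b2*log m_j + b3*log d_ij (expect b3 < 0)."""
          X = np.column_stack([np.ones_like(flows), np.log(mass_o),
                               np.log(mass_d), np.log(dist)])
          beta, *_ = np.linalg.lstsq(X, np.log(flows), rcond=None)
          return beta

      rng = np.random.default_rng(0)
      m_o, m_d = rng.uniform(1, 10, 200), rng.uniform(1, 10, 200)
      d = rng.uniform(1, 50, 200)
      F = np.exp(0.5 + 1.0 * np.log(m_o) + 0.8 * np.log(m_d)
                 - 1.2 * np.log(d) + rng.normal(0, 0.1, 200))
      print(fit_gravity(F, m_o, m_d, d))   # recovers roughly [0.5, 1.0, 0.8, -1.2]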

  4. Virtual Wireless Sensor Networks: Adaptive Brain-Inspired Configuration for Internet of Things Applications

    Science.gov (United States)

    Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki

    2016-01-01

    Many researchers are devoting attention to the so-called “Internet of Things” (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user’s demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology. PMID:27548177

  5. Virtual Wireless Sensor Networks: Adaptive Brain-Inspired Configuration for Internet of Things Applications.

    Science.gov (United States)

    Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki

    2016-08-19

    Many researchers are devoting attention to the so-called "Internet of Things" (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user's demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology.

  6. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition, but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market, in the $500,000 to $1 million range, offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  7. Developing Simulated Cyber Attack Scenarios Against Virtualized Adversary Networks

    Science.gov (United States)

    2017-03-01

    ...enclave, as shown in Figure 11, is a common design for many secure networks. Different variations of a cyber-attack scenario can be rehearsed based... achieved a greater degree of success against multiple variations of an enemy network. A primary goal of this thesis is to define and...

  8. VIRTUAL NETWORK COMMUNICATION AND ITS IMPACT ON INTERPERSONAL RELATIONS

    Directory of Open Access Journals (Sweden)

    Сергей Николаевич Хуторной

    2013-08-01

    Full Text Available The Internet plays an ever-increasing role in human life. Sites where visitors can interact with other visitors are increasingly popular, and communication "online" is becoming common, partially displacing communication in the real world. This raises the problem of Internet addiction, or dependence on the Internet, which includes not only dependence on virtual communication in social networks, but also addiction to gambling, online games, electronic purchases, and so on. Virtual reality acts not only as an intermediary for virtual communication, but also significantly affects the nature, means and methods of communication, which ultimately has a significant, often negative, transforming effect on personality. The article is dedicated to the analysis of communication in virtual reality. The specifics of network communication, as compared with real social communication, are examined, and the concept of Internet addiction is discussed. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-9

  9. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  10. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  11. NRS : a system for automated network virtualization in IaaS cloud infrastructures

    NARCIS (Netherlands)

    Theodorou, D.; Mak, R.H.; Keijser, J.J.; Suerink, T.

    2013-01-01

    Applications running in multi-tenant IaaS clouds increasingly require networked compute resources, which may belong to several clouds hosted in multiple data-centers. To accommodate these applications, network virtualization is necessary, not only for isolation between tenants, but also for

  12. The Connect Effect Building Strong Personal, Professional, and Virtual Networks

    CERN Document Server

    Dulworth, Michael

    2008-01-01

    Entrepreneur and executive development expert Mike Dulworth's THE CONNECT EFFECT provides readers with a simple framework and practical tools for developing that crucial competitive advantage: a high-quality personal, professional/organizational and virtual network.

  13. HeNCE: A Heterogeneous Network Computing Environment

    Directory of Open Access Journals (Sweden)

    Adam Beguelin

    1994-01-01

    Full Text Available Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.

  14. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  15. WebVR——Web Virtual Reality Engine Based on P2P network

    OpenAIRE

    zhihan LV; Tengfei Yin; Yong Han; Yong Chen; Ge Chen

    2011-01-01

    WebVR, a multi-user online virtual reality engine, is introduced. The main contributions are mapping the geographical space and the virtual space to the P2P overlay network space, and dividing the three spaces by a quad-tree method. Each geocode is identified with a hash value, which is used to index the user list, the terrain data, and the model object data. Sharing of data through an improved Kademlia network model is designed and implemented. In this model, the XOR algorithm is used to calculate the distanc...
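
    The XOR metric at the heart of Kademlia is compact enough to show directly. The Python sketch below is a generic illustration of that metric, not the WebVR engine's code: the distance between two identifiers is their bitwise XOR, and the position of the highest differing bit selects the k-bucket.

      def xor_distance(a: int, b: int) -> int:
          # Kademlia's distance metric: bitwise XOR of the two identifiers.
          return a ^ b

      def bucket_index(a: int, b: int) -> int:
          """Index of the k-bucket that b falls into from a's point of view."""
          d = xor_distance(a, b)
          return d.bit_length() - 1 if d else -1   # -1: identical identifiers

      node, key = 0b10110011, 0b10101100
      print(xor_distance(node, key), bucket_index(node, key))   # -> 31 4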

  16. Cloud and virtual data storage networking

    CERN Document Server

    Schulz, Greg

    2011-01-01

    The amount of data being generated, processed, and stored has reached unprecedented levels. Even during the recent economic crisis, there has been no slow down or information recession. Instead, the need to process, move, and store data has only increased. Consequently, IT organizations are looking to do more with what they have while supporting growth along with new services without compromising on cost and service delivery. Cloud and Virtual Data Storage Networking, by savvy IT industry veteran Greg Schulz, looks at converging IT resources and management technologies for facilitating efficie

  17. Virtual File System Mounting & Searching With Network JVM For LAN

    Directory of Open Access Journals (Sweden)

    Nikita Kamble

    2015-08-01

    Full Text Available Computer technology has grown rapidly over the past decades. Much of this can be attributed to the Internet, as many computers now need to be networked together to establish an online connection. A local area network (LAN) is a group of computers and associated devices that share a common communication line or wireless link to a service. Typically, a LAN encompasses computers and peripherals connected to a secure server within a small geographic area, such as an office building or home, together with computers and other mobile devices that share resources such as a printer or network storage. A LAN is contrasted in principle with a wide area network (WAN), which covers a larger geographic distance and may involve leased telecom circuits, while the media for LANs are locally managed. Ethernet over twisted-pair cabling and Wi-Fi are the two most common transmission technologies in use for LANs. The rise of virtualization has fueled the development of virtual LANs (VLANs), which allow network administrators to logically group network nodes and partition their networks without the need for major infrastructure changes. In some situations a wireless LAN, or Wi-Fi, may be preferable to a wired LAN because of its flexibility and cost. Companies are assessing WLANs as a replacement for their wired infrastructure as the number of smartphones, tablets and other mobile devices proliferates.

  18. Constructing Social Networks from Unstructured Group Dialog in Virtual Worlds

    Science.gov (United States)

    Shah, Fahad; Sukthankar, Gita

    Virtual worlds and massively multi-player online games are rich sources of information about large-scale teams and groups, offering the tantalizing possibility of harvesting data about group formation, social networks, and network evolution. However these environments lack many of the cues that facilitate natural language processing in other conversational settings and different types of social media. Public chat data often features players who speak simultaneously, use jargon and emoticons, and only erratically adhere to conversational norms. In this paper, we present techniques for inferring the existence of social links from unstructured conversational data collected from groups of participants in the Second Life virtual world. We present an algorithm for addressing this problem, Shallow Semantic Temporal Overlap (SSTO), that combines temporal and language information to create directional links between participants, and a second approach that relies on temporal overlap alone to create undirected links between participants. Relying on temporal overlap is noisy, resulting in a low precision and networks with many extraneous links. In this paper, we demonstrate that we can ameliorate this problem by using network modularity optimization to perform community detection in the noisy networks and severing cross-community links. Although using the content of the communications still results in the best performance, community detection is effective as a noise reduction technique for eliminating the extra links created by temporal overlap alone.
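
    The temporal-overlap baseline and the modularity-based pruning described here can be sketched in a few lines of Python with networkx. The utterance list, window size and function names below are illustrative assumptions, not the authors' SSTO pipeline.

      import networkx as nx
      from networkx.algorithms import community

      def overlap_links(utterances, window=10.0):
          """utterances: list of (speaker, timestamp). Undirected overlap links."""
          G = nx.Graph()
          for i, (s1, t1) in enumerate(utterances):
              for s2, t2 in utterances[i + 1:]:
                  if s1 != s2 and abs(t1 - t2) <= window:
                      G.add_edge(s1, s2)
          return G

      def prune_cross_community(G):
          # Noise reduction: detect communities, then sever cross-community links.
          communities = community.greedy_modularity_communities(G)
          member = {n: i for i, c in enumerate(communities) for n in c}
          G.remove_edges_from([(u, v) for u, v in list(G.edges)
                               if member[u] != member[v]])
          return G

      chat = [("ann", 1.0), ("bob", 4.0), ("cat", 40.0), ("dan", 43.0), ("ann", 47.0)]
      print(sorted(prune_cross_community(overlap_links(chat)).edges))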

  19. A Deployment Scheme Based Upon Virtual Force for Directional Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chiu-Kuo Liang

    2015-11-01

    Full Text Available A directional sensor network is composed of many directional sensor nodes. Unlike conventional omni-directional sensors that always have an omni-angle sensing range, directional sensors may have a limited sensing angle due to technical constraints or cost considerations. Area coverage is still an essential issue in a directional sensor network. In this paper, we study the area coverage problem in directional sensor networks with mobile sensors, which can move to the correct places to achieve high coverage. We present distributed self-deployment schemes for mobile sensors. After the sensors are randomly deployed, each sensor calculates the next location to move to in order to obtain better coverage than before. The locations of sensors are adjusted round by round so that the coverage is gradually improved. Based on the virtual forces between directional sensors, we design a scheme, namely the virtual force scheme. Simulation results show the effectiveness of our scheme in terms of coverage improvement.
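
    A virtual-force iteration of the general kind described above is easy to sketch: nearby sensors repel each other and distant ones attract, so repeated small moves spread the nodes toward even coverage. The Python fragment below uses assumed constants and omits the directional (sensing-angle) aspect of the paper's scheme.

      import numpy as np

      def virtual_force_step(pos, d_th=10.0, k_rep=2.0, k_att=0.1, step=0.5):
          # One synchronous update: forces are computed from the current layout.
          new = pos.copy()
          for i in range(len(pos)):
              force = np.zeros(2)
              for j in range(len(pos)):
                  if i == j:
                      continue
                  diff = pos[i] - pos[j]
                  d = np.linalg.norm(diff) or 1e-9
                  if d < d_th:            # too close: push apart
                      force += k_rep * (d_th - d) * diff / d
                  else:                   # too far: pull together gently
                      force -= k_att * (d - d_th) * diff / d
              new[i] += step * force
          return new

      pos = np.random.default_rng(2).uniform(0, 20, size=(8, 2))
      for _ in range(30):
          pos = virtual_force_step(pos)
      print(pos.round(1))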

  20. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izanaa Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  1. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  2. Smart grids : combination of 'Virtual Power Plant'-concept and 'smart network'-design

    NARCIS (Netherlands)

    El Bakari, K.; Kling, W.L.

    2010-01-01

    The concept of a virtual power plant (VPP) offers a solution to control and manage a higher level of dispersed generation in today's passive distribution networks. Under certain conditions the VPP is able to displace power and energy, which implies more control over the energy flow in the networks. To

  3. Virtual Fiber Networking and Impact of Optical Path Grooming on Creating Efficient Layer One Services

    Science.gov (United States)

    Naruse, Fumisato; Yamada, Yoshiyuki; Hasegawa, Hiroshi; Sato, Ken-Ichi

    This paper presents a novel “virtual fiber” network service that exploits wavebands. This service provides virtual direct tunnels that directly convey wavelength paths to connect customer facilities. To improve the resource utilization efficiency of the service, a network design algorithm is developed that can allow intermediate path grooming at limited nodes and can determine the best node location. Numerical experiments demonstrate the effectiveness of the proposed service architecture.

  4. Creation of a virtual antidotes network between pharmacy departments of catalan hospitals

    Directory of Open Access Journals (Sweden)

    Raquel Aguilar-Salmerón

    2017-05-01

    Full Text Available Objective: To design a virtual antidote network between hospitals that could help to locate online those hospitals stocking the antidotes that are hardest to obtain, and to ensure that the medication is loaned in case of necessity. Methods: The application was developed by four hospital pharmacists and two clinical toxicologists with the support of a healthcare informatics consultancy. Results: The antidote network in Catalonia, Spain, was launched in July 2015. It can be accessed through the platform www.xarxaantidots.org. The application has an open area with overall information about the project and the option to ask toxicological questions of a non-urgent nature. The private area is divided into four sections: (1) Antidotes: data of interest about the 15 antidotes included in the network and their recommended stock depending on the complexity of the hospital; (2) Antidote stock management: virtual formulary; (3) Loans: location of antidotes through the online map application Google Maps, and virtual loan requests; and (4) Documentation. As of June 2016, 40 public and private hospitals from all four provinces of Catalonia have joined the network; they have accessed the private area 2,102 times and requested two loans of silibinin, one of hydroxocobalamin, three of antiophidic serum and three of botulism antitoxin. Thirteen toxicological consultations have been received. Conclusions: The implementation of this network improves communication between centers that manage poisoned patients, adapts and standardizes the stock of antidotes in hospitals, speeds up loans when necessary, and improves the quality of care for poisoned patients.

  5. Simulation of Virtual Local Area Networks (VLAN) Based on Software Defined Networking (SDN) Using the POX Controller

    Directory of Open Access Journals (Sweden)

    Rohmat Tulloh

    2015-11-01

    Full Text Available VLAN (Virtual LAN) is a technology that can configure logical networks independently of the physical network structure. Earlier research already predicted the need for virtual networks, which ultimately led to the creation of VLANs. However, the current network paradigm is inflexible, and dependence on vendors is very strong because the data plane and control plane functions are bundled in a single device. SDN (Software Defined Networking) is an evolution of network technology that answers these growing demands by separating the data plane and control plane functions of a device. The POX controller is used to simulate and test the SDN platform. This study uses OpenFlow version 1.0 to attach VLAN headers, so the research focuses on evaluating whether VLAN forwarding that uses OpenFlow as the control plane works correctly. The results support applying the characteristics of VLAN technology to SDN, since it ran correctly in connectivity, verification and security tests. Further tests on the impact of SDN under a scenario with an increasing number of VLAN IDs showed that set-up time grows as the number of hosts increases and that, using the OpenFlow protocol, network latency can be monitored via the round trip time (RTT) parameter, which remains stable in the range of 0.2 to 6 seconds even as the number of VLAN IDs and the background traffic grow.
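
    As an illustration of the kind of controller logic the study evaluates, the fragment below is a minimal POX-style module that installs an OpenFlow 1.0 rule tagging traffic from one switch port with a VLAN ID before forwarding it. The port numbers and the VLAN ID are assumed for illustration; this is not the study's controller code.

      from pox.core import core
      import pox.openflow.libopenflow_01 as of

      def _handle_ConnectionUp(event):
          # On switch connect, install: traffic entering port 1 gets VLAN 10,
          # then goes out port 2 (illustrative port numbers and VLAN ID).
          msg = of.ofp_flow_mod()
          msg.match.in_port = 1
          msg.actions.append(of.ofp_action_vlan_vid(vlan_vid=10))
          msg.actions.append(of.ofp_action_output(port=2))
          event.connection.send(msg)

      def launch():
          core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)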

  6. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.

  7. Mapping, Awareness, and Virtualization Network Administrator Training Tool (MAVNATT) Architecture and Framework

    Science.gov (United States)

    2015-06-01

    unit may set up and tear down the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical...language and runs on Linux and Unix based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network...start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python

  8. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  9. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

    PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful, as they use the resources of hundreds of thousands of CPUs joined together. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in a huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at Titan HPC and Summit PowerPC at Oak Ridge Computin...

  10. Temporal dynamics of blue and green virtual water trade networks

    Science.gov (United States)

    Konar, M.; Dalin, C.; Hanasaki, N.; Rinaldo, A.; Rodriguez-Iturbe, I.

    2012-12-01

    Global food security increasingly relies on the trade of food commodities. Freshwater resources are essential to agricultural production and are thus embodied in the trade of food commodities, referred to as "virtual water trade." Agricultural production predominantly relies on rainwater (i.e., "green water"), though irrigation (i.e., "blue water") does play an important role. These different sources of water have distinctly different opportunity costs, which may be reflected in the way these resources are traded. Thus, the temporal dynamics of the virtual water trade networks from these distinct water sources require characterization. We find that 42 × 10^9 m^3 blue and 310 × 10^9 m^3 green water was traded in 1986, growing to 78 × 10^9 m^3 blue and 594 × 10^9 m^3 green water traded in 2008. Three nations dominate the export of green water resources: the USA, Argentina, and Brazil. As a country increases its export trade partners it tends to export relatively more blue water. However, as a country increases its import trade partners it does not preferentially import water from a specific source. The amount of virtual water that a country imports by increasing its import trade partners has been decreasing over time, with the exception of the soy trade. Both blue and green virtual water networks are efficient: 119 × 10^9 m^3 blue and 105 × 10^9 m^3 green water were saved in 2008. Importantly, trade has been increasingly saving water over time, due to the intensification of crop trade on more water-efficient links.
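
    To experiment with such networks, the blue and green flows can be represented as edge attributes on a directed graph; the sketch below uses networkx with illustrative volumes, not the paper's data.

        import networkx as nx

        # Toy directed trade graph; edge weights are virtual water volumes
        # in units of 10^9 m^3 (illustrative values, not the study's data).
        g = nx.DiGraph()
        g.add_edge("USA", "Japan", green=30.0, blue=2.0)
        g.add_edge("Brazil", "China", green=25.0, blue=0.5)
        g.add_edge("Argentina", "China", green=18.0, blue=0.3)

        for source in ("green", "blue"):
            total = sum(d[source] for _, _, d in g.edges(data=True))
            top = max(g.nodes, key=lambda v: g.out_degree(v, weight=source))
            print(f"{source}: {total:.1f} x 10^9 m^3 traded; top exporter: {top}")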

  11. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer performing 50 million operations per second is suggested. Its realization would allow the solution of JINR data analysis problems for large spectrometers (in particular, the DELPHI collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS each. The processors are combined by means of standard VME buses. A MicroVAX II host computer organizes the operation of the system, and data input and output are realized via the MicroVAX II peripherals. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, so all JINR users have access to the suggested system.

  12. Multiscale virtual particle based elastic network model (MVP-ENM) for normal mode analysis of large-sized biomolecules.

    Science.gov (United States)

    Xia, Kelin

    2017-12-20

    In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence from both the mass distributions and distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in the Debye-Waller factor (B-factor) prediction, the results from our MVP-GNM with a high resolution are as good as the ones from GNM. Even with low resolutions, our MVP-GNM can still capture the global behavior of the B-factor very well with mismatches predominantly from the regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from our MVP-ANM are highly consistent with the ones from ANM even with very low resolutions and a coarse grid. Finally, the great advantage of MVP-ANM model for large-sized biomolecules has been demonstrated by using two poliovirus virus structures. The paper ends with a conclusion.
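
    The classical baseline that MVP-GNM coarse-grains can be sketched in a few lines: build the Kirchhoff (connectivity) matrix from a distance cutoff and read B-factors off the diagonal of its pseudoinverse. This is a minimal plain-GNM sketch with placeholder coordinates and cutoff, not the multiscale virtual-particle model itself.

        import numpy as np

        def gnm_bfactors(coords, cutoff=7.0):
            # Kirchhoff matrix: -1 for contacts within the cutoff, degree on the diagonal.
            n = len(coords)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            gamma = -(d < cutoff).astype(float)
            np.fill_diagonal(gamma, 0.0)
            np.fill_diagonal(gamma, -gamma.sum(axis=1))
            # B-factors are proportional to the diagonal of the pseudoinverse
            # (the pseudoinverse discards the rigid-body zero mode).
            return np.diag(np.linalg.pinv(gamma))

        coords = np.random.rand(50, 3) * 30.0   # placeholder C-alpha coordinates
        print(gnm_bfactors(coords)[:5])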

  13. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  14. Network worlds : from link analysis to virtual places.

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, C. (Cliff)

    2002-01-01

    Significant progress is being made in knowledge systems through recent advances in the science of very large networks. Attention is now turning in many quarters to the potential impact on counter-terrorism methods. After reviewing some of these advances, we will discuss the difference between such 'network analytic' approaches, which focus on large, homogeneous graph structures, and what we are calling 'link analytic' approaches, which focus on somewhat smaller graphs with heterogeneous link types. We use this venue to begin the process of rigorously defining link analysis methods, especially the concept of chaining of views of multidimensional databases. We conclude with some speculation on potential connections to virtual world architectures.

  15. Optimized Virtual Machine Placement with Traffic-Aware Balancing in Data Center Networks

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2016-01-01

    Full Text Available Virtualization has been an efficient method to fully utilize computing resources such as servers. The way of placing virtual machines (VMs) among a large pool of servers greatly affects the performance of data center networks (DCNs). As network resources have become a main bottleneck of the performance of DCNs, we concentrate on VM placement with Traffic-Aware Balancing to evenly utilize the links in DCNs. In this paper, we first proposed the Virtual Machine Placement Problem with Traffic-Aware Balancing (VMPPTB), proved it to be NP-hard, and designed a Longest Processing Time Based Placement algorithm (LPTBP) to solve it. To take advantage of communication locality, we proposed the Locality-Aware Virtual Machine Placement Problem with Traffic-Aware Balancing (LVMPPTB), a multiobjective optimization problem that simultaneously minimizes the maximum number of VM partitions of requests and the maximum bandwidth occupancy on uplinks of Top of Rack (ToR) switches. We also proved it to be NP-hard and designed a heuristic algorithm (Least-Load First Based Placement, LLBP) to solve it. Through extensive simulations, the proposed heuristic algorithm is shown to significantly balance the bandwidth occupancy on uplinks of ToR switches, while keeping the number of VM partitions of each request small.
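
    The LPT rule at the heart of the LPTBP algorithm can be sketched as below: sort VMs by load and always assign the heaviest remaining VM to the currently least-loaded server. This is the generic Longest-Processing-Time heuristic; the paper's traffic-aware specifics are omitted.

        import heapq

        def lpt_place(vm_loads, num_servers):
            # Min-heap of (total load, server id, assigned VM loads).
            servers = [(0.0, i, []) for i in range(num_servers)]
            heapq.heapify(servers)
            for load in sorted(vm_loads, reverse=True):
                total, i, vms = heapq.heappop(servers)   # least-loaded server
                vms.append(load)
                heapq.heappush(servers, (total + load, i, vms))
            return sorted(servers, key=lambda s: s[1])

        for total, i, vms in lpt_place([8, 5, 5, 4, 3, 2, 2, 1], 3):
            print(f"server {i}: load {total} <- {vms}")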

  16. SDN-enabled OPS with QoS guarantee for reconfigurable virtual data center networks

    NARCIS (Netherlands)

    Miao, W.; Agraz, F.; Peng, S.; Spadaro, S.; Bernini, G.; Perelló, J.; Zervas, G.; Nejabati, R.; Ciulli, Nicola; Simeonidou, D.; Dorren, H.; Calabretta, N.

    2015-01-01

    Optical packet switching (OPS) can enhance the performance of data center networks (DCNs) by providing fast and large-capacity switching capability. Benefiting from the software-defined networking (SDN) control plane, which can update the look-up table (LUT) of the OPS, virtual DCNs can be flexibly reconfigured.

  17. ANCS: Achieving QoS through Dynamic Allocation of Network Resources in Virtualized Clouds

    Directory of Open Access Journals (Sweden)

    Cheol-Ho Hong

    2016-01-01

    Full Text Available To meet the various requirements of cloud computing users, research on guaranteeing Quality of Service (QoS) is gaining widespread attention in the field of cloud computing. However, as cloud computing platforms adopt virtualization as an enabling technology, it becomes challenging to distribute system resources to each user according to their diverse requirements. Although ample research has been conducted in order to meet QoS requirements, the proposed solutions lack simultaneous support for multiple policies, degrade the aggregated throughput of network resources, and incur CPU overhead. In this paper, we propose a new mechanism, called ANCS (Advanced Network Credit Scheduler), to guarantee QoS through dynamic allocation of network resources in virtualization. To meet the various network demands of cloud users, ANCS aims to concurrently provide multiple performance policies; these include weight-based proportional sharing, minimum bandwidth reservation, and maximum bandwidth limitation. In addition, ANCS develops an efficient work-conserving scheduling method for maximizing network resource utilization. Finally, ANCS achieves low CPU overhead via its lightweight design, which is important for practical deployment.
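
    A minimal sketch of the policy mix ANCS combines, weight-proportional sharing bounded by per-VM minimum reservations and maximum caps, might look as follows; the data layout and figures are illustrative assumptions, not the ANCS implementation.

        def allocate_bandwidth(vms, capacity):
            # Honor minimum reservations first, then distribute the remainder
            # in proportion to weights, never exceeding each VM's cap.
            alloc = {v['name']: v['min_bw'] for v in vms}
            remaining = capacity - sum(alloc.values())
            active = [v for v in vms if alloc[v['name']] < v['max_bw']]
            while remaining > 1e-9 and active:
                total_w = sum(v['weight'] for v in active)
                leftover = 0.0
                for v in active:
                    share = remaining * v['weight'] / total_w
                    room = v['max_bw'] - alloc[v['name']]
                    take = min(share, room)
                    alloc[v['name']] += take
                    leftover += share - take      # redistribute what hit a cap
                remaining = leftover
                active = [v for v in active if alloc[v['name']] < v['max_bw'] - 1e-9]
            return alloc

        vms = [{'name': 'a', 'weight': 3, 'min_bw': 100, 'max_bw': 600},
               {'name': 'b', 'weight': 1, 'min_bw': 100, 'max_bw': 300}]
        print(allocate_bandwidth(vms, 1000))   # -> {'a': 600.0, 'b': 300.0}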

  18. Application of the dynamically allocated virtual clustering management system to emulated tactical network experimentation

    Science.gov (United States)

    Marcus, Kelvin

    2014-06-01

    The U.S Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve their ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary in conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.

  19. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  20. RELATIVE PANORAMIC CAMERA POSITION ESTIMATION FOR IMAGE-BASED VIRTUAL REALITY NETWORKS IN INDOOR ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    M. Nakagawa

    2017-09-01

    Full Text Available Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  1. Software-Defined Network Solutions for Science Scenarios: Performance Testing Framework and Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Settlemyer, Bradley [Los Alamos National Laboratory (LANL)]; Kettimuthu, R. [Argonne National Laboratory (ANL)]; Boley, Josh [Argonne National Laboratory (ANL)]; Katramatos, Dimitrios [Brookhaven National Laboratory (BNL)]; Rao, Nageswara S. [ORNL]; Sen, Satyabrata [ORNL]; Liu, Qiang [ORNL]

    2018-01-01

    High-performance scientific workflows utilize supercomputers, scientific instruments, and large storage systems. Their execution requires fast setup of a small number of dedicated network connections across the geographically distributed facility sites. We present Software-Defined Network (SDN) solutions consisting of site daemons that use dpctl, Floodlight, ONOS, or OpenDaylight controllers to set up these connections. The development of these SDN solutions could be quite disruptive to the infrastructure, while requiring a close coordination among multiple sites; in addition, the large number of possible controller and device combinations to investigate could make the infrastructure unavailable to regular users for extended periods of time. In response, we develop a Virtual Science Network Environment (VSNE) using virtual machines, Mininet, and custom scripts that support the development, testing, and evaluation of SDN solutions, without the constraints and expenses of multi-site physical infrastructures; furthermore, the chosen solutions can be directly transferred to production deployments. By complementing VSNE with a physical testbed, we conduct targeted performance tests of various SDN solutions to help choose the best candidates. In addition, we propose a switching response method to assess the setup times and throughput performances of different SDN solutions, and present experimental results that show their advantages and limitations.
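
    A VSNE-style virtual testbed can be approximated with Mininet's Python API; the sketch below wires two emulated sites to an external SDN controller. The controller address and port and the link parameters are assumptions, and any of the controllers named above could sit behind them.

        # Minimal Mininet sketch of a two-site topology driven by an external
        # SDN controller (addresses and link parameters are assumptions).
        from mininet.net import Mininet
        from mininet.node import RemoteController, OVSSwitch
        from mininet.link import TCLink

        def build():
            net = Mininet(controller=None, switch=OVSSwitch, link=TCLink)
            net.addController('c0', controller=RemoteController,
                              ip='127.0.0.1', port=6653)
            s1, s2 = net.addSwitch('s1'), net.addSwitch('s2')
            h1, h2 = net.addHost('h1'), net.addHost('h2')
            net.addLink(h1, s1)
            net.addLink(h2, s2)
            net.addLink(s1, s2, bw=100, delay='10ms')   # emulated wide-area link
            net.start()
            net.pingAll()        # basic connectivity check across the "sites"
            net.stop()

        if __name__ == '__main__':
            build()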

  2. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers at the Japan Atomic Energy Research Institute is reported. The fields of atomic energy research that use supercomputers most heavily and the contents of their computations are outlined. Vectorization is briefly explained, and nuclear fusion, nuclear reactor physics, the thermal-hydraulic safety of nuclear reactors, the inherent parallelism of atomic energy computations such as fluid dynamics, algorithms for vector processing, and the speedup obtained by vectorization are discussed. At present the Japan Atomic Energy Research Institute uses two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation have changed from criticality calculations around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new reactor types, and reactor safety assessment at present. The method of using computers has also advanced from batch to time-sharing processing, from one-dimensional to three-dimensional computation, from steady linear to unsteady nonlinear computation, and from experimental analysis to numerical simulation. (K.I.)

  3. Secure transfer of surveillance data over Internet using Virtual Private Network technology. Field trial between STUK and IAEA

    International Nuclear Information System (INIS)

    Smartt, H.; Martinez, R.; Caskey, S.; Honkamaa, T.; Ilander, T.; Poellaenen, R.; Jeremica, N.; Ford, G.

    2000-01-01

    One of the primary concerns of employing remote monitoring technologies for IAEA safeguards applications is the high cost of data transmission. Transmitting data over the Internet has often been shown to be less expensive than other data transmission methods; however, the security of the Internet is often considered to be low. Virtual Private Network technology has emerged as a solution to this problem. A field demonstration was implemented to evaluate the use of Virtual Private Networks (via the Internet) as a means for data transmission, with security, reliability, and cost as the evaluation points. The existing Finnish Remote Environmental Monitoring System, located at the STUK facility in Helsinki, Finland, served as the field demonstration system. Sandia National Laboratories (SNL) established a Virtual Private Network between STUK (Radiation and Nuclear Safety Authority) Headquarters in Helsinki, Finland, and IAEA Headquarters in Vienna, Austria. Data from the existing STUK Remote Monitoring System was viewed at the IAEA via this network. The Virtual Private Network link was established in a manner that guarantees data security, and the encryption was verified using a network sniffer. No problems were encountered during the test. In the test system, fixed costs were higher than in the previous system, which utilized telephone lines; on the other hand, transmission and operating costs are very low. Therefore, with low data volumes the test system is not cost-effective, but if the data volume is tens of megabytes per day, the use of Virtual Private Networks over the Internet is economically justifiable. A cost-benefit analysis should be performed for each site due to significant variables. (orig.)
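
    The cost conclusion can be made concrete with a toy break-even calculation; all cost figures below are illustrative assumptions, not values measured in the trial.

        # Illustrative break-even sketch: a per-MB telephone-line transfer cost
        # against a flat-rate Internet/VPN link, as daily data volume grows.
        phone_cost_per_mb = 0.50      # assumed per-MB cost of telephone transfer
        vpn_fixed_per_day = 10.00     # assumed flat daily cost of the VPN link

        for mb_per_day in (1, 5, 10, 20, 50):
            phone = phone_cost_per_mb * mb_per_day
            cheaper = "VPN" if vpn_fixed_per_day < phone else "phone"
            print(f"{mb_per_day:3d} MB/day: phone {phone:6.2f} vs "
                  f"VPN {vpn_fixed_per_day:.2f} -> {cheaper}")

    With these assumed figures the VPN wins above 20 MB/day, which is consistent with the report's "tens of megabytes per day" threshold.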

  4. Development of Virtual Resource Based IoT Proxy for Bridging Heterogeneous Web Services in IoT Networks

    Directory of Open Access Journals (Sweden)

    Wenquan Jin

    2018-05-01

    Full Text Available The Internet of Things is comprised of heterogeneous devices, applications, and platforms using multiple communication technologies to connect the Internet for providing seamless services ubiquitously. With the requirement of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Therefore, providing a consistent interface to access services from those environments is difficult. Moreover, bridging the existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy is comprised of server and client to forward messages for different communication environments using the virtual resources which include the server for the message sender and the client for the message receiver. We design the proxy for the Open Connectivity Foundation network where the virtual resources are discovered by the clients as Open Connectivity Foundation resources. The virtual resources represent the resources which expose services in the Internet by web service providers. Although the services are provided by web service providers from the Internet, the client can access services using the consistent communication protocol in the Open Connectivity Foundation network. For discovering the resources to access services, the client also uses the consistent discovery interface to discover the Open Connectivity Foundation devices and virtual resources.

  5. Development of Virtual Resource Based IoT Proxy for Bridging Heterogeneous Web Services in IoT Networks.

    Science.gov (United States)

    Jin, Wenquan; Kim, DoHyeun

    2018-05-26

    The Internet of Things is comprised of heterogeneous devices, applications, and platforms using multiple communication technologies to connect the Internet for providing seamless services ubiquitously. With the requirement of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Therefore, providing a consistent interface to access services from those environments is difficult. Moreover, bridging the existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy is comprised of server and client to forward messages for different communication environments using the virtual resources which include the server for the message sender and the client for the message receiver. We design the proxy for the Open Connectivity Foundation network where the virtual resources are discovered by the clients as Open Connectivity Foundation resources. The virtual resources represent the resources which expose services in the Internet by web service providers. Although the services are provided by web service providers from the Internet, the client can access services using the consistent communication protocol in the Open Connectivity Foundation network. For discovering the resources to access services, the client also uses the consistent discovery interface to discover the Open Connectivity Foundation devices and virtual resources.
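
    The virtual-resource idea can be caricatured as a small HTTP proxy that exposes a local resource path for each remote web service and forwards requests to it. The paths, upstream URLs, and use of plain HTTP (rather than the OCF protocol stack the paper targets) are assumptions for illustration.

        # Hypothetical sketch of the virtual-resource pattern, not the paper's code.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import urlopen

        VIRTUAL_RESOURCES = {
            '/vr/weather': 'http://example.com/api/weather',   # assumed upstream service
            '/vr/lamp':    'http://example.com/api/lamp',
        }

        class ProxyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                upstream = VIRTUAL_RESOURCES.get(self.path)
                if upstream is None:
                    self.send_error(404, 'unknown virtual resource')
                    return
                with urlopen(upstream) as resp:        # forward to the web service
                    body = resp.read()
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(body)

        if __name__ == '__main__':
            HTTPServer(('0.0.0.0', 8080), ProxyHandler).serve_forever()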

  6. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor design is often an intuition-driven process in which designers first develop or use simplified simulation tools for each physical phenomenon involved. As the project develops, complexity in each discipline increases, and the implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, making the task increasingly challenging. In the context of a renewal in reactor designs, first-realization projects are often run in parallel with advanced design studies that depend strongly on the final options. As a consequence, tools to globally assess and optimize reactor core features, with the accuracy of current design methods, are needed. This should be possible within reasonable simulation time and without requiring advanced computer skills at the project management scale. These tools should also easily accommodate modeling progress in each discipline through the project's lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the data analysis framework ROOT, is very well adapted to this approach. It allows diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks...) and optimization techniques (genetic algorithms). Database management and visualization are also made very easy. In this paper, we present the various implementation steps of this core physics tool, in which neutronics, thermal-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics is presented. The flexibility of the URANIE tool is also illustrated with the presentation of several approaches to improve Pareto front quality. (author)

  7. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The supercomputers of the 1980s are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architectures will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics.

  8. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from the petascale supercomputer Mistral, which is ranked as the 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  9. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. For running large physical simulations powerful computers are obligatory, effectively splitting the thesis in two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts are outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  10. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real-time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real-time or quicker on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)

  11. On-demand virtual optical network access using 100 Gb/s Ethernet.

    Science.gov (United States)

    Ishida, Osamu; Takamichi, Toru; Arai, Sachine; Kawate, Ryusuke; Toyoda, Hidehiro; Morita, Itsuro; Araki, Soichiro; Ichikawa, Toshiyuki; Hoshida, Takeshi; Murai, Hitoshi

    2011-12-12

    Our Terabit LAN initiatives attempt to enhance the scalability and utilization of lambda resources. This paper describes bandwidth-on-demand virtualized 100GE access to WDM networks on a field fiber test-bed using multi-domain optical-path provisioning. © 2011 Optical Society of America

  12. TinCan: User-Defined P2P Virtual Network Overlays for Ad-hoc Collaboration

    Directory of Open Access Journals (Sweden)

    Pierre St Juste

    2014-10-01

    Full Text Available Virtual private networking (VPN) has become an increasingly important component of a collaboration environment because it ensures private, authenticated communication among participants, using existing collaboration tools, where users are distributed across multiple institutions and can be mobile. The majority of current VPN solutions are based on a centralized VPN model, where all IP traffic is tunneled through a VPN gateway. Nonetheless, there are several use case scenarios that require a model where end-to-end VPN links are tunneled upon existing Internet infrastructure in a peer-to-peer (P2P) fashion, removing the bottleneck of a centralized VPN gateway. We propose a novel virtual network — TinCan — based on peer-to-peer private network tunnels. It reuses existing standards and implementations of services for discovery notification (XMPP), reflection (STUN) and relaying (TURN), facilitating configuration. In this approach, trust relationships maintained by centralized (or federated) services are automatically mapped to TinCan links. In one use scenario, TinCan allows unstructured P2P overlays connecting trusted end-user devices — while only requiring VPN software on user devices and leveraging online social network (OSN) infrastructure already widely deployed. This paper describes the architecture and design of TinCan and presents an experimental evaluation of a prototype supporting Windows, Linux, and Android mobile devices. Results quantify the overhead introduced by the network virtualization layer, and the resource requirements imposed on services needed to bootstrap TinCan links.

  13. State-of-the-art on Virtualization and Software Defined Networking for Efficient Resource Allocation on Multi-tenant 5G Networks

    Directory of Open Access Journals (Sweden)

    Tsirakis Christos

    2017-01-01

    Full Text Available The global data traffic explosion is expected to set stringent requirements for next generation networks in the coming decades, and very low latencies will have to be guaranteed to enable new delay-critical services. However, current Software Defined Networking (SDN) solutions have limitations in terms of separating both data and control planes among tenants/operators and in their capability to adapt to new or changing requirements. Moreover, some virtualization schemes do not ensure isolation of resources and do not guarantee bandwidth across the entities, while others fail to provide the flexibility for slices to customize resource allocation across their users. Therefore, novel SDN and virtualization techniques should be implemented to realize the upcoming 5G network, which will require at least efficient resource allocation and multi-tenancy among the plethora of different requirements.

  14. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.

  15. Constructing Battery-Aware Virtual Backbones in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yang Yuanyuan

    2007-01-01

    Full Text Available A critical issue in battery-powered sensor networks is to construct energy efficient virtual backbones for network routing. Recent study in battery technology reveals that batteries tend to discharge more power than needed and reimburse the over-discharged power if they are recovered. In this paper we first provide a mathematical battery model suitable for implementation in sensor networks. We then introduce the concept of battery-aware connected dominating set (BACDS) and show that in general the minimum BACDS (MBACDS) can achieve longer lifetime than the previous backbone structures. Then we show that finding a MBACDS is NP-hard and give a distributed approximation algorithm to construct the BACDS. The resulting BACDS constructed by our algorithm is at most (8+Δ)opt size, where Δ is the maximum node degree and opt is the size of an optimal BACDS. Simulation results show that the BACDS can save a significant amount of energy and achieve up to 30% longer network lifetime than previous schemes.
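
    To make the backbone construction concrete, here is a simplified, centralized greedy sketch that favors dominators with high residual battery and high coverage; the paper's algorithm is distributed, uses the recovery-aware battery model, and additionally enforces connectivity of the backbone, none of which this toy captures.

        import networkx as nx

        def battery_aware_ds(g, battery):
            # Greedily pick dominators, scoring nodes by residual battery times
            # the number of still-uncovered nodes they would dominate.
            uncovered = set(g.nodes)
            dominators = []
            while uncovered:
                best = max(g.nodes,
                           key=lambda v: battery[v] * len((set(g[v]) | {v}) & uncovered))
                dominators.append(best)
                uncovered -= set(g[best]) | {best}
            return dominators

        g = nx.random_geometric_graph(30, 0.3, seed=1)   # toy sensor field
        battery = {v: 1.0 for v in g}                    # full batteries initially
        print(battery_aware_ds(g, battery))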

  16. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
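
    The quoted scaling can be sanity-checked with back-of-envelope arithmetic; the total Kepler target count assumed below (~200,000 stars) is an approximation introduced here, not a figure from the abstract.

        # Back-of-envelope check of the quoted FLTI scaling.
        injections_per_core_hour = 16        # from the abstract
        injections_per_star = 2000           # "shallow" experiment, per star
        targets = 200_000                    # assumed Kepler target count
        fraction = 0.16                      # 16% of targets
        wall_hours = 200                     # quoted wall-clock budget

        total_injections = targets * fraction * injections_per_star
        core_hours = total_injections / injections_per_core_hour
        cores_needed = core_hours / wall_hours
        print(f"{total_injections:.2e} injections -> {core_hours:.2e} core-hours "
              f"-> ~{cores_needed:,.0f} cores for {wall_hours} h")   # ~20,000 cores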

  17. Saving Money and Time with Virtual Server

    CERN Document Server

    Sanders, Chris

    2006-01-01

    Microsoft Virtual Server 2005 consistently proves to be worth its weight in gold, with new implementations thought up every day. With this product now a free download from Microsoft, scores of new users are able to experience what the power of virtualization can do for their networks. This guide is aimed at network administrators who are interested in ways that Virtual Server 2005 can be implemented in their organizations in order to save money and increase network productivity. It contains information on setting up a virtual network, virtual consolidation, virtual security, virtual honeypo

  18. Building a sense of virtual community: the role of the features of social networking sites.

    Science.gov (United States)

    Chen, Chi-Wen; Lin, Chiun-Sin

    2014-07-01

    In recent years, social networking sites have received increased attention because of the potential of this medium to transform business by building virtual communities. However, theoretical and empirical studies investigating how specific features of social networking sites contribute to building a sense of virtual community (SOVC), an important dimension of a successful virtual community, are rare. Furthermore, SOVC scales have been developed, and research on this issue has been called for, but few studies have heeded this call. On the basis of prior literature, this study proposes that perceptions of the three most salient features of social networking sites, namely system quality (SQ), information quality (IQ), and social information exchange (SIE), play a key role in fostering SOVC. In particular, SQ is proposed to increase IQ and SIE, and SIE is proposed to enhance IQ, both of which thereafter build SOVC. The research model was examined in the context of Facebook, one of the most popular social networking sites in the world. We adopted Blanchard's scales to measure SOVC. Data gathered using a Web-based questionnaire, and analyzed with partial least squares, were utilized to test the model. The results demonstrate that SIE, SQ, and IQ are the factors that form SOVC. The findings also suggest that SQ plays a fundamental role in supporting SIE and IQ in social networking sites. Implications for theory, practice, and future research directions are discussed.

  19. To Enhance Collaborative Learning and Practice Network Knowledge with a Virtualization Laboratory and Online Synchronous Discussion

    Directory of Open Access Journals (Sweden)

    Wu-Yuin Hwang

    2014-09-01

    Full Text Available Recently, various computer networking courses have included additional laboratory classes in order to enhance students' learning achievement. However, these classes need a suitable laboratory where each student can connect network devices to configure and test functions within different network topologies. In this case, the Linux operating system can be used to operate the network devices, and virtualization can host multiple OSs to support a significant number of students. Previous research successfully applied virtualization in a laboratory but focused only on individual assignments. The present study extends that work by designing the Networking Virtualization-Based Laboratory (NVBLab), which requires collaborative learning among the experimental students. The students were divided into an experimental group and a control group. The experimental group performed their laboratory assignments using NVBLab, whereas the control group completed them on virtual machines (VMs) installed on their personal computers. Moreover, students using NVBLab were provided with an online synchronous discussion (OSD) feature that enabled them to communicate with others. The laboratory assignments were divided into two parts: Basic Labs and Advanced Labs. The results show that the experimental group significantly outperformed the control group in two Advanced Labs and the post-test after the Advanced Labs. Furthermore, the experimental group's activity was greater than the control group's, based on the total average command count per laboratory. Finally, the findings of the interviews and questionnaires with the experimental group reveal that NVBLab was helpful during and after laboratory class.

  20. Network Analysis of a Virtual Community of Learning of Economics Educators

    Science.gov (United States)

    Fontainha, Elsa; Martins, Jorge Tiago; Vasconcelos, Ana Cristina

    2015-01-01

    Introduction: This paper aims at understanding virtual communities of learning in terms of dynamics, types of knowledge shared by participants, and network characteristics such as size, relationships, density, and centrality of participants. It looks at the relationships between these aspects and the evolution of communities of learning. It…

  1. SDN/NFV orchestration for dynamic deployment of virtual SDN controllers as VNF for multi-tenant optical networks

    OpenAIRE

    Muñoz, Raül; Vilalta, Ricard; Casellas, Ramon; Martínez, Ricardo; Szyrkowiec, T.; Autenrieth, A.; López, Víctor; López, D.

    2015-01-01

    We propose to virtualize the SDN control functions and move them to the cloud. We experimentally evaluate the first SDN/NFV orchestration architecture to dynamically deploy independent SDN controller instances for each deployed virtual optical network.

  2. Three Dimensional Virtual Environments as a Tool for Development of Personal Learning Networks

    Directory of Open Access Journals (Sweden)

    Aggeliki Nikolaou

    2013-01-01

    Full Text Available Technological advances have altered how, where, when, and what information is created, presented, and diffused in working and social environments, as well as how learners interact with that information. Virtual worlds constitute an emerging realm for collaborative play, learning, and work. This paper describes how virtual worlds provide a mechanism to facilitate the creation and development of Personal Learning Networks. This qualitative investigation focuses on the role of three-dimensional virtual environments (3DVEs) in the creation and development of Personal Learning Networks (PLNs). More specifically, this work investigates the reasons that drive members of Education Oriented Groups (hereafter "Groups") in Second Life (SL) to adopt a technological innovation as a milieu of learning, the ways they use it, and the types of learning that occur in it. The authors also discuss the collaborative and social characteristics of these environments, which provide access to excellence in a specific area of interest and promote innovative ideas on a global scale through sharing educational resources and developing good educational practices without spatial and temporal constraints.

  3. A Social Network Analysis of Teaching and Research Collaboration in a Teachers' Virtual Learning Community

    Science.gov (United States)

    Lin, Xiaofan; Hu, Xiaoyong; Hu, Qintai; Liu, Zhichun

    2016-01-01

    Analysing the structure of a social network can help us understand the key factors influencing interaction and collaboration in a virtual learning community (VLC). Here, we describe the mechanisms used in social network analysis (SNA) to analyse the social network structure of a VLC for teachers and discuss the relationship between face-to-face…

  4. Virtual resistive network and conductivity reconstruction with Faraday's law

    International Nuclear Information System (INIS)

    Lee, Min Gi; Ko, Min-Su; Kim, Yong-Jung

    2014-01-01

    A network-based conductivity reconstruction method is introduced using the third Maxwell equation, or Faraday's law, for a static case. The usual choice in electrical impedance tomography is the divergence-free equation for the electrical current density. However, if the electrical current density is given, the curl-free equation for the electrical field gives a direct relation between the current and the conductivity and this relation is used in this paper. Mimetic discretization is applied to the equation, which gives the virtual resistive network system. Properties of the numerical schemes introduced are investigated and their advantages over other conductivity reconstruction methods are discussed. Numerically simulated results, with an analysis of noise propagation, are presented. (paper)
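
    The relation used here can be written out explicitly: for a static field with a known current density, Faraday's law applied to E = ρJ (with resistivity ρ = 1/σ) yields a first-order equation directly in ρ, sketched below for the two-dimensional case.

        \nabla \times \mathbf{E} = 0, \qquad \mathbf{E} = \rho\,\mathbf{J}, \qquad \rho = \sigma^{-1}
        \;\Longrightarrow\;
        \partial_x\!\left(\rho\,J_y\right) - \partial_y\!\left(\rho\,J_x\right) = 0 \quad \text{(2-D static case)}

    Mimetic discretization of this equation is what produces the virtual resistive network described in the abstract.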

  5. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
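
    The syntactic-structure step can be caricatured by masking variable tokens (numbers, hex IDs) so that messages differing only in parameters collapse to one signature; the sketch below is a crude offline stand-in for the paper's online clustering, with made-up log lines.

        import re
        from collections import defaultdict

        def template(msg):
            # Mask hex-like and numeric tokens so parameter values do not
            # split one message type into many clusters.
            msg = re.sub(r'0x[0-9a-fA-F]+', '<HEX>', msg)
            msg = re.sub(r'\d+', '<NUM>', msg)
            return msg

        logs = [
            "node 12 temperature 81C",
            "node 97 temperature 79C",
            "link 0x1f3a retrain count 4",
            "link 0x22b0 retrain count 9",
        ]

        clusters = defaultdict(list)
        for line in logs:
            clusters[template(line)].append(line)

        for sig, members in clusters.items():
            print(f"{len(members):3d}  {sig}")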

  6. Virtual optical network provisioning with unified service logic processing model for software-defined multidomain optical networks

    Science.gov (United States)

    Zhao, Yongli; Li, Shikun; Song, Yinan; Sun, Ji; Zhang, Jie

    2015-12-01

    Hierarchical control architecture is designed for software-defined multidomain optical networks (SD-MDONs), and a unified service logic processing model (USLPM) is first proposed for various applications. USLPM-based virtual optical network (VON) provisioning process is designed, and two VON mapping algorithms are proposed: random node selection and per controller computation (RNS&PCC) and balanced node selection and hierarchical controller computation (BNS&HCC). Then an SD-MDON testbed is built with OpenFlow extension in order to support optical transport equipment. Finally, VON provisioning service is experimentally demonstrated on the testbed along with performance verification.

  7. Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Fosgerau, Anders; Hansen, Peter Søren Kirk

    1999-01-01

    The initial design considerations and research goals for an ATM network based virtual seminar room with 5 sites are presented.

  8. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigurable...

  9. Artificial neural networks and neuro-fuzzy inference systems as virtual sensors for hydrogen safety prediction

    Energy Technology Data Exchange (ETDEWEB)

    Karri, Vishy; Ho, Tien [School of Engineering, University of Tasmania, GPO Box 252-65, Hobart, Tasmania 7001 (Australia); Madsen, Ole [Department of Production, Aalborg University, Fibigerstraede 16, DK-9220 Aalborg (Denmark)

    2008-06-15

    Hydrogen is increasingly investigated as an alternative fuel to petroleum products for running internal combustion engines and for powering remote-area power systems using generators. The safety issues related to hydrogen gas are further exacerbated by the expensive instrumentation required to measure the percentage of explosive limits, flow rates, and production pressure. This paper investigates the use of model-based virtual sensors (rather than expensive physical sensors) in connection with hydrogen production on a Hogen 20 electrolyzer system. The virtual sensors are used to predict relevant hydrogen safety parameters, such as the percentage of the lower explosive limit, hydrogen pressure, and hydrogen flow rate, as a function of different input conditions of supplied power (voltage and current), the feed of de-ionized water, and Hogen 20 electrolyzer system parameters. The virtual sensors are developed by applying various artificial intelligence techniques. To train and appraise the neural network models as virtual sensors, the Hogen 20 electrolyzer was instrumented with the necessary sensors to gather experimental data, which, together with the MATLAB neural networks toolbox and tailor-made adaptive neuro-fuzzy inference systems (ANFIS), were used as predictive tools to estimate the hydrogen safety parameters. It was shown that using the neural networks the hydrogen safety parameters were predicted with less than 3% average root mean square percentage error. The most accurate prediction was achieved with ANFIS. (author)
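
    In the same spirit, a minimal virtual sensor is a small regression network mapping operating inputs to safety parameters. The sketch below uses scikit-learn instead of the MATLAB toolbox, and synthetic toy relations in place of real electrolyzer logs; every constant is an assumption.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic stand-in data: inputs (voltage, current, water feed) mapped
        # to outputs (%LEL, pressure, flow) by made-up toy relations.
        rng = np.random.default_rng(0)
        X = rng.uniform([40, 10, 0.5], [60, 30, 2.0], size=(500, 3))
        y = np.c_[0.04 * X[:, 1],               # toy %LEL vs current
                  10 + 0.1 * X[:, 0],           # toy pressure vs voltage
                  0.3 * X[:, 1] * X[:, 2]]      # toy flow vs current * feed

        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                             random_state=0)
        model.fit(X, y)
        print(model.predict([[50.0, 20.0, 1.0]]))   # predicted safety parameters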

  10. Virtual Vision

    Science.gov (United States)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  11. Virtualization in network and servers infrastructure to support dynamic system reconfiguration in ALMA

    Science.gov (United States)

    Shen, Tzu-Chiang; Ovando, Nicolás.; Bartsch, Marcelo; Simmond, Max; Vélez, Gastón; Robles, Manuel; Soto, Rubén.; Ibsen, Jorge; Saldias, Christian

    2012-09-01

    ALMA is the first astronomical project being constructed and operated under an industrial approach, due to the huge number of elements involved. In order to achieve maximum throughput during the engineering and scientific commissioning phases, several production lines have been established to work in parallel. This decision required modifications to the original system architecture, in which all the elements are controlled and operated within a unique Standard Test Environment (STE). Advances in the network industry, together with the maturity of the virtualization paradigm, allow us to provide a solution that can replicate the STE infrastructure without changing its network address definition. This is only possible with the Virtual Routing and Forwarding (VRF) and Virtual LAN (VLAN) concepts. The solution allows dynamic reconfiguration of antennas and other hardware across the production lines with minimum time and zero human intervention in the cabling. We also push virtualization even further: classical rack-mount servers are being replaced and consolidated by blade servers, on top of which virtualized servers are centrally administered with VMware ESX. Hardware costs and system administration effort will be reduced considerably. This mechanism has been established and operated successfully during the last two years. The experience gave us the confidence to propose a solution that divides the main operation array into subarrays using the same concept, which will introduce great flexibility and efficiency for ALMA operations and may eventually simplify the ALMA core observing software, since there will be no need to deal with subarray complexity at the software level.

  12. Intra and Inter-PON ONU to ONU Virtual Private Networking using OFDMA in a Ring Topology

    DEFF Research Database (Denmark)

    Deng, Lei; Zhao, Ying; Pang, Xiaodan

    2011-01-01

    In this paper, we propose a novel WDM-PON architecture to support efficient and bandwidth-scalable virtual private network (VPN) emulation over both inter-PON and intra-PON. The virtual ring link for the VPN communications among ONUs is realized by using additional low-cost optical pa...

  13. Investigating Factors Related to Virtual Private Network Adoption in Small Businesses

    Science.gov (United States)

    Lederer, Karen

    2012-01-01

    The purpose of this quantitative study was to investigate six factors that may influence adoption of virtual private network (VPN) technologies in small businesses with fewer than 100 employees. Prior research indicated small businesses employing fewer than 100 workers do not adopt VPN technology at the same rate as larger competitors, and the…

  14. Constructing Battery-Aware Virtual Backbones in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chi Ma

    2007-05-01

    A critical issue in battery-powered sensor networks is to construct energy-efficient virtual backbones for network routing. Recent studies in battery technology reveal that batteries tend to discharge more power than needed and to recover the over-discharged power when given idle periods. In this paper we first provide a mathematical battery model suitable for implementation in sensor networks. We then introduce the concept of the battery-aware connected dominating set (BACDS) and show that in general the minimum BACDS (MBACDS) can achieve a longer lifetime than previous backbone structures. Then we show that finding a MBACDS is NP-hard and give a distributed approximation algorithm to construct the BACDS. The resulting BACDS constructed by our algorithm is at most (8+Δ)·opt in size, where Δ is the maximum node degree and opt is the size of an optimal BACDS. Simulation results show that the BACDS can save a significant amount of energy and achieve up to 30% longer network lifetime than previous schemes.
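
    The distributed algorithm itself is not reproduced in the abstract; the sketch below is only a centralized greedy caricature of the battery-aware selection idea, growing a connected set that favors nodes with high residual battery and many uncovered neighbors. The graph and battery levels are invented.

        # Centralized greedy sketch of a battery-aware connected dominating
        # set; the paper's algorithm is distributed, this only illustrates
        # the selection criterion.
        def battery_aware_cds(adj, battery):
            nodes = set(adj)
            # seed with the node best combining battery and coverage
            seed = max(nodes, key=lambda v: battery[v] * (len(adj[v]) + 1))
            backbone = {seed}
            dominated = {seed} | set(adj[seed])
            while dominated != nodes:
                # candidates adjacent to the backbone keep it connected
                frontier = {u for v in backbone for u in adj[v]} - backbone
                best = max(frontier,
                           key=lambda v: (len(set(adj[v]) - dominated),
                                          battery[v]))
                backbone.add(best)
                dominated |= {best} | set(adj[best])
            return backbone

        adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5], 4: [2, 6], 5: [3], 6: [4]}
        battery = {1: 0.9, 2: 0.7, 3: 0.8, 4: 0.5, 5: 0.6, 6: 0.4}
        print(sorted(battery_aware_cds(adj, battery)))   # e.g. [2, 3, 4]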

  15. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to run artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  16. Classification of EMG signals using artificial neural networks for virtual hand prosthesis control.

    Science.gov (United States)

    Mattioli, Fernando E R; Lamounier, Edgard A; Cardoso, Alexandre; Soares, Alcimar B; Andrade, Adriano O

    2011-01-01

    Computer-based training systems have been widely studied in the field of human rehabilitation. In health applications, Virtual Reality presents itself as an appropriate tool to simulate training environments without exposing the patients to risks. In particular, virtual prosthetic devices have been used to reduce the great mental effort needed by patients fitted with myoelectric prosthesis, during the training stage. In this paper, the application of Virtual Reality in a hand prosthesis training system is presented. To achieve this, the possibility of exploring Neural Networks in a real-time classification system is discussed. The classification technique used in this work resulted in a 95% success rate when discriminating 4 different hand movements.
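
    The general pipeline such work follows, windowed EMG features fed to a small neural network classifier for four movement classes, can be sketched as below; the features, channel count and synthetic signals are assumptions, not the authors' protocol.

        # Illustrative sketch only: RMS and zero-crossing features per EMG
        # channel, classified by a small neural network. Data are synthetic.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def features(window):
            rms = np.sqrt((window ** 2).mean(axis=1))
            zc = (np.diff(np.sign(window), axis=1) != 0).sum(axis=1)
            return np.concatenate([rms, zc])

        rng = np.random.default_rng(1)
        X, y = [], []
        for label in range(4):                  # 4 hand movements
            for _ in range(100):
                # 2 EMG channels x 200 samples, class-dependent amplitude
                w = rng.normal(scale=0.2 + 0.3 * label, size=(2, 200))
                X.append(features(w)); y.append(label)

        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                            random_state=0).fit(np.array(X), y)
        print(clf.score(np.array(X), y))        # training accuracy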

  17. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved through the proposed services are presented. The assessment of environmental noise threats includes the creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept of estimating the source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network makes it possible to automatically update noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.

  18. The Resource Mapping Algorithm of Wireless Virtualized Networks for Saving Energy in Ultradense Small Cells

    Directory of Open Access Journals (Sweden)

    Sai Zou

    2015-01-01

    As current networks are designed for peak loads, the result is insufficient resource utilization and energy waste. Virtualization technology makes it possible to deploy intelligent, energy-aware networks, and resource sharing can become an effective energy-saving technique. How to put more small cells into the sleeping state to save energy in ultradense small cell systems has become a research hotspot. Based on the mapping feature of virtualized networks, a new wireless resource mapping algorithm for saving energy in ultradense small cells is put forward for the case where the wireless resource amount is satisfied in every small cell. First, the method divides the virtual cells. Then, through alternate updating between small cell mapping and wireless resource allocation, the smallest number of small cells is used and the other small cells are turned into the sleeping state, on the premise of guaranteeing users' QoS. Next, the energy consumption of the wireless access system, the wireless resource utilization, and the convergence of the proposed algorithm are analyzed in theory. Finally, simulation results demonstrate that the algorithm can effectively reduce the system energy consumption and the required wireless resource amount under the condition of satisfying users' QoS.
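
    A toy rendering of the "fewest active cells" objective follows: greedily activate the sleeping cell that can absorb the most remaining demand within its capacity, and leave the rest asleep. The coverage sets, demands and capacities are invented, and the paper's actual alternate updating between cell mapping and resource allocation under QoS constraints is far richer.

        # Greedy cell-activation sketch: cells not activated stay asleep.
        def activate_cells(coverage, demand, capacity):
            unserved = set(demand)
            active = {}
            while unserved:
                def servable(c):
                    users = sorted(coverage[c] & unserved,
                                   key=demand.get, reverse=True)
                    load, served = 0, []
                    for u in users:
                        if load + demand[u] <= capacity[c]:
                            load += demand[u]; served.append(u)
                    return served
                cell = max((c for c in coverage if c not in active),
                           key=lambda c: sum(demand[u] for u in servable(c)))
                served = servable(cell)
                if not served:
                    raise ValueError("demand cannot be satisfied")
                active[cell] = served
                unserved -= set(served)
            return active

        coverage = {"c1": {"u1", "u2", "u3"}, "c2": {"u2", "u4"},
                    "c3": {"u3", "u4", "u5"}}
        demand = {"u1": 2, "u2": 1, "u3": 2, "u4": 1, "u5": 1}
        capacity = {"c1": 6, "c2": 3, "c3": 6}
        print(activate_cells(coverage, demand, capacity))  # c2 sleeps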

  19. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards the exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it in a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  20. Development of virtual private network for JT-60SA CAD integration

    International Nuclear Information System (INIS)

    Oshima, Takayuki; Fujita, Takaaki; Seki, Masami; Kawashima, Hisato; Hoshino, Katsumichi; Shibanuma, Kiyoshi; Verrecchia, M.; Teuchner, B.

    2010-01-01

    Since the CAD models will be exchanged and integrated at Naka for JT-60SA, a common computer network efficiently connecting the Naka and Garching sites needed to be established. A Virtual Private Network (VPN) was introduced, with a LAN on a computer network physically separated from the JAEA intranet area and a firewall. In July 2009, a new VPN connection between the Naka and Garching sites was successfully demonstrated using IPSec VPN technology with a commercial and cost-effective firewall/router for security. It was found that the introduction of the Wide Area File Service (WAFS) could solve the issue of data transmission time and enhance the usability of the VPN for design integration in JT-60SA. (author)

  1. Virtual target tracking (VTT) as applied to mobile satellite communication networks

    Science.gov (United States)

    Amoozegar, Farid

    1999-08-01

    Traditionally, target tracking has been used for aerospace applications, such as tracking highly maneuvering targets in a cluttered environment for missile-to-target intercept scenarios. Although the speed and maneuvering capability of current aerospace targets demand more efficient algorithms, many complex techniques have already been proposed in the literature, which primarily cover the defense applications of tracking methods. On the other hand, the rapid growth of global communication systems, Global Information Systems (GIS), and Global Positioning Systems (GPS) is creating new and more diverse challenges for multi-target tracking applications. Mobile communication and computing can expect a huge market for Cellular Communication and Tracking Devices (CCTD), which will track networked devices at the cellular level. The objective of this paper is to introduce a new concept, Virtual Target Tracking (VTT), for commercial applications of multi-target tracking algorithms and techniques as applied to mobile satellite communication networks. It is discussed how Virtual Target Tracking would bring more diversity to target tracking research.

  2. Structure and relationships within global manufacturing virtual networks

    Directory of Open Access Journals (Sweden)

    José Ramón Vilana

    2009-04-01

    Global Manufacturing Virtual Networks (GMVNs) are dynamically changing organizations formed by Original Equipment Manufacturers (OEMs), Contract Manufacturers (CMs), turn-key and component suppliers, R+D centres and distributors. These networks establish a new type of vertical and horizontal relations between independent companies or even competitors, in which there is no need to maintain internal manufacturing resources but rather to manage and share the network's resources. The fluid relations that exist within GMVNs make them very permeable organizations, easy to connect to and disconnect from one another, and able to choose a set of partners with specific attributes. The result is a highly flexible system characterized by low barriers to entry and exit, geographic flexibility, low costs, rapid technological diffusion, high diversification through contract manufacturers and exceptional economies of scale. However, there are three major drawbacks of GMVNs that should be considered at the beginning of this type of collaboration: (1) the risk that contract manufacturers develop their own end-products in competition with their customers; (2) technology transfer between competing OEMs through other members of the GMVN; and (3) the loss of process expertise by OEMs as they outsource more manufacturing processes to the network.

  3. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata.

    Science.gov (United States)

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-16

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge at Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create SHM virtual environment and convey data to proper audiences, are also included.

  4. An evaluation of current high-performance networks

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Christian; Bonachea, Dan; Cote, Yannick; Duell, Jason; Hargrove, Paul; Husbands, Parry; Iancu, Costin; Welcome, Michael; Yelick, Katherine

    2003-01-25

    High-end supercomputers are increasingly built out of commodity components, and lack tight integration between the processor and network. This often results in inefficiencies in the communication subsystem, such as high software overheads and/or message latencies. In this paper we use a set of microbenchmarks to quantify the cost of this commoditization, measuring software overhead, latency, and bandwidth on five contemporary supercomputing networks. We compare the performance of the ubiquitous MPI layer to that of lower-level communication layers, and quantify the advantages of the latter for small message performance. We also provide data on the potential for various communication-related optimizations, such as overlapping communication with computation or other communication. Finally, we determine the minimum size needed for a message to be considered 'large' (i.e., bandwidth-bound) on these platforms, and provide historical data on the software overheads of a number of supercomputers over the past decade.
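
    The ping-pong style of measurement behind such studies can be illustrated with a toy version. The sketch below assumes nothing about the paper's MPI-based setup; it simply times round trips of various message sizes over a local TCP socket, which is how latency-bound and bandwidth-bound regimes are typically separated.

        # Toy ping-pong microbenchmark: one-way latency and bandwidth
        # versus message size over a loopback TCP connection.
        import socket, threading, time

        HOST, PORT, REPS = "127.0.0.1", 50007, 200

        def echo_server():
            with socket.create_server((HOST, PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    while data := conn.recv(1 << 16):
                        conn.sendall(data)      # echo everything back

        threading.Thread(target=echo_server, daemon=True).start()
        time.sleep(0.2)                          # let the server start

        with socket.create_connection((HOST, PORT)) as sock:
            for size in (8, 1 << 10, 1 << 14, 1 << 16):
                msg = b"x" * size
                t0 = time.perf_counter()
                for _ in range(REPS):
                    sock.sendall(msg)
                    got = 0
                    while got < size:            # read the full echo
                        got += len(sock.recv(1 << 16))
                dt = (time.perf_counter() - t0) / REPS
                print(f"{size:>8} B  one-way ~{dt / 2 * 1e6:8.1f} us"
                      f"  ~{size / (dt / 2) / 1e6:7.1f} MB/s")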

  5. From physical to virtual: interpersonal relations generating networks among students of a graduate course

    Directory of Open Access Journals (Sweden)

    Roberto Vilmar Satur

    2015-09-01

    Introduction: Social networks are increasingly present in people's daily lives, especially those of students, and have become a reality in the educational environment. More than entertainment, these networks have been valuable interaction tools for passing information along. Objective: In this scenario, the aim of this research is to observe the interpersonal and intragroup interaction abilities of a group of undergraduate students at a public university, in order to understand the formation and expansion of social networks initiated through personal contact and extended to the virtual universe. Specifically, it aims at mapping the students' interpersonal interactions in the creation of social networks and the expansion of their relations, describing the most used forms of interaction, and obtaining basic profile data of the actors. Methodology: To better understand the reality of these subjects, a questionnaire consisting of closed questions directed to students of the course was adopted as the data collection instrument. A total of 95 students were enrolled in the course last May and could be marked by the respondents. The survey was carried out throughout June 2014 and yielded 71 answered questionnaires. The collected data were tabulated and analyzed with the Gephi software. Results: The results show a tendency to form an extensive network within the course, though it is more intense among certain students, forming small groups, and bridge actors exist. The article also shows a clear transposition of personal contact relationships to the virtual environment. Conclusion: Social networks can increasingly serve as a space for communication and interaction, although their use in education is tied to the teaching and learning process, advancing the ways of interaction and the access to information and search among their users.

  6. Grids, Clouds, and Virtualization

    Science.gov (United States)

    Cafaro, Massimo; Aloisio, Giovanni

    This chapter introduces and puts in context Grids, Clouds, and Virtualization. Grids promised to deliver computing power on demand. However, despite a decade of active research, no viable commercial grid computing provider has emerged. On the other hand, it is widely believed - especially in the Business World - that HPC will eventually become a commodity. Just as some commercial consumers of electricity have mission requirements that necessitate they generate their own power, some consumers of computational resources will continue to need to provision their own supercomputers. Clouds are a recent business-oriented development with the potential to render this eventually as rare as organizations that generate their own electricity today, even among institutions who currently consider themselves the unassailable elite of the HPC business. Finally, Virtualization is one of the key technologies enabling many different Clouds. We begin with a brief history in order to put them in context, and recall the basic principles and concepts underlying and clearly differentiating them. A thorough overview and survey of existing technologies provides the basis to delve into details as the reader progresses through the book.

  7. An efficient routing algorithm for event based monitoring in a plant using virtual sink nodes in a wireless sensor network

    International Nuclear Information System (INIS)

    Jain, Sanjay Kumar; Vietla, Srinivas; Roy, D.A.; Biswas, B.B.; Pithawa, C.K.

    2010-01-01

    A Wireless Sensor Network is a collection of wireless sensor nodes arranged in a self-forming network without the aid of any infrastructure or administration. The individual nodes have limited resources, and hence efficient communication mechanisms between the nodes have to be devised for continued operation of the network in a plant environment. In wireless sensor networks, a sink node or base station at one end acts as the recipient of information gathered by all other sensor nodes in the network, and the information arrives at the sink through multiple hops across the nodes of the network. A routing algorithm has been developed in which a virtual sink node is generated whenever the hop count of an ordinary node crosses a certain specified value. The virtual sink node acts as a recipient node for the data of all neighboring nodes. This virtual sink helps in reducing routing overhead, especially when the sensor network is scaled to a larger network. The advantages of this scheme are lower energy consumption, reduced congestion in the network and longevity of the network. The above algorithm is suitable for event-based or interval-based monitoring systems in nuclear plants. This paper describes the working of the proposed algorithm and provides its implementation details. (author)
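
    On an abstract topology, the hop-count trigger can be illustrated as follows: BFS hop counts are computed from the real sink, and any node beyond a threshold registers with the nearest node at the threshold ring, which then serves as its virtual sink. The topology below is invented, and the paper's protocol details (election, aggregation, energy accounting) are omitted.

        # Sketch: assign far-away nodes to virtual sinks at the hop ring.
        from collections import deque

        def hop_counts(adj, start):
            hops = {start: 0}
            q = deque([start])
            while q:
                v = q.popleft()
                for u in adj[v]:
                    if u not in hops:
                        hops[u] = hops[v] + 1
                        q.append(u)
            return hops

        def assign_virtual_sinks(adj, sink, max_hops):
            hops = hop_counts(adj, sink)
            ring = [v for v, h in hops.items() if h == max_hops]
            far = [v for v, h in hops.items() if h > max_hops]
            vsinks = {}
            for v in far:
                d = hop_counts(adj, v)   # BFS from the far node itself
                vsinks[v] = min(ring, key=lambda r: d.get(r, float("inf")))
            return vsinks

        adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4, 5],
               4: [3], 5: [3, 6], 6: [5]}
        print(assign_virtual_sinks(adj, sink=0, max_hops=3))  # {4:3,5:3,6:3}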

  8. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  9. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  10. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network

    Science.gov (United States)

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-01-01

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment comparing to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy. PMID:29231868

  11. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    Science.gov (United States)

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment comparing to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.
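
    The four-layer architecture described above (two convolutional layers, one fully connected layer and an output layer) can be sketched in PyTorch as below; the channel counts, kernel sizes and data are assumptions rather than the authors' configuration.

        # Sketch of a CNN virtual sensor: measured channels in, an
        # estimated response at an unmeasured point out.
        import torch
        import torch.nn as nn

        class ResponseVirtualSensor(nn.Module):
            def __init__(self, in_ch=4, win=256, out_dim=1):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(in_ch, 16, kernel_size=9, padding=4), nn.ReLU(),
                    nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
                    nn.Flatten(),
                    nn.Linear(32 * win, 64), nn.ReLU(),  # fully connected
                    nn.Linear(64, out_dim),              # output layer
                )

            def forward(self, x):        # x: (batch, channels, samples)
                return self.net(x)

        model = ResponseVirtualSensor()
        measured = torch.randn(8, 4, 256)  # 4 measured channels, 256 samples
        estimate = model(measured)         # response at the faulty sensor
        print(estimate.shape)              # torch.Size([8, 1])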

  12. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique for computing the velocity model of a geologic structure from the first-arrival travel times of seismic waves. The technique is used in the processing of regional and global seismic data, in seismic exploration for prospecting mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for seismic tomography applications, with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high-performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, the arrival times of seismic waves are computed from an assumed velocity model of the geologic structure being analyzed. In order to solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on the target architectures is considered. During the first stage of this work, algorithms were developed for execution on
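
    The numerical core of the linearized update in this scheme can be written in a few lines. In the sketch below, NumPy solves the Tikhonov-regularized normal equations; the matrix and residual vector are random stand-ins for real ray-tracing output.

        # Linearized tomography update: (G^T G + lam*I) dm = G^T r
        import numpy as np

        rng = np.random.default_rng(0)
        n_rays, n_cells, lam = 300, 100, 1e-2
        G = rng.random((n_rays, n_cells))   # ray-path lengths per cell
        r = rng.normal(size=n_rays)         # travel-time residuals

        dm = np.linalg.solve(G.T @ G + lam * np.eye(n_cells), G.T @ r)
        print(dm.shape)  # slowness update per cell, added to the model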

  13. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D'Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
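
    As a toy version of the mapping problem, the sketch below greedily places the heaviest-communicating rank pairs on the closest allocated nodes. The communication and distance matrices are invented, and the paper's reordering methods (spectral bisection, neighbor-join trees) are considerably more sophisticated.

        # Greedy topology-aware task mapping sketch: rank -> node.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 6                                     # ranks == allocated nodes
        comm = rng.integers(0, 100, (n, n)); comm = (comm + comm.T) // 2
        dist = rng.integers(1, 5, (n, n));   dist = (dist + dist.T) // 2
        np.fill_diagonal(comm, 0); np.fill_diagonal(dist, 0)

        # visit rank pairs in decreasing order of traffic
        pairs = sorted(((comm[i, j], i, j) for i in range(n)
                        for j in range(i + 1, n)), reverse=True)
        placed, free = {}, set(range(n))
        for _, i, j in pairs:
            for r in (i, j):
                if r in placed:
                    continue
                anchor = placed.get(j if r == i else i)
                # put r on the free node closest to its placed partner,
                # or on an arbitrary free node if the partner is unplaced
                node = (min(free, key=lambda m: dist[anchor, m])
                        if anchor is not None else free.pop())
                free.discard(node)
                placed[r] = node

        cost = sum(comm[i, j] * dist[placed[i], placed[j]]
                   for i in range(n) for j in range(i + 1, n))
        print(placed, "hop-weighted traffic:", cost)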

  14. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, a PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room was converted into a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  15. Virtual Learning Environments as Sociomaterial Agents in the Network of Teaching Practice

    Science.gov (United States)

    Johannesen, Monica; Erstad, Ola; Habib, Laurence

    2012-01-01

    This article presents findings related to the sociomaterial agency of educators and their practice in Norwegian education. Using actor-network theory, we ask how Virtual Learning Environments (VLEs) negotiate the agency of educators and how they shape their teaching practice. Since the same kinds of VLE tools have been widely implemented…

  16. Design and Test of the Cross-Format Schema Protocol (XFSP) for Networked Virtual Environments

    National Research Council Canada - National Science Library

    Serin, Ekrem

    2003-01-01

    A Networked Virtual Environment (Net-VE) is a distributed software system in which multiple users interact with each other in real time even though these users may be located around the world Zyda 99...

  17. Poverty-Related Diseases College: a virtual African-European network to build research capacity

    NARCIS (Netherlands)

    Dorlo, Thomas P. C.; Fernández, Carmen; Troye-Blomberg, Marita; de Vries, Peter J.; Boraschi, Diana; Mbacham, Wilfred F.

    2016-01-01

    The Poverty-Related Diseases College was a virtual African-European college and network that connected young African and European biomedical scientists working on poverty-related diseases. The aim of the Poverty-Related Diseases College was to build sustainable scientific capacity and international

  18. Game-Based Virtual Worlds as Decentralized Virtual Activity Systems

    Science.gov (United States)

    Scacchi, Walt

    There is widespread interest in the development and use of decentralized systems and virtual world environments as possible new places for engaging in collaborative work activities. Similarly, there is widespread interest in stimulating new technological innovations that enable people to come together through social networking, file/media sharing, and networked multi-player computer game play. A decentralized virtual activity system (DVAS) is a networked computer supported work/play system whose elements and social activities can be both virtual and decentralized (Scacchi et al. 2008b). Massively multi-player online games (MMOGs) such as World of Warcraft and online virtual worlds such as Second Life are each popular examples of a DVAS. Furthermore, these systems are beginning to be used for research, development, and education activities in different science, technology, and engineering domains (Bainbridge 2007, Bohannon et al. 2009; Rieber 2005; Scacchi and Adams 2007; Shaffer 2006), which are also of interest here. This chapter explores two case studies of DVASs developed at the University of California at Irvine that employ game-based virtual worlds to support collaborative work/play activities in different settings. The settings include those that model and simulate practical or imaginative physical worlds in different domains of science, technology, or engineering through alternative virtual worlds where players/workers engage in different kinds of quests or quest-like workflows (Jakobsson 2006).

  19. A framework using cluster-based hybrid network architecture for collaborative virtual surgery.

    Science.gov (United States)

    Qin, Jing; Choi, Kup-Sze; Poon, Wai-Sang; Heng, Pheng-Ann

    2009-12-01

    Research on collaborative virtual environments (CVEs) opens the opportunity for simulating the cooperative work in surgical operations. It is however a challenging task to implement a high performance collaborative surgical simulation system because of the difficulty in maintaining state consistency with minimum network latencies, especially when sophisticated deformable models and haptics are involved. In this paper, an integrated framework using cluster-based hybrid network architecture is proposed to support collaborative virtual surgery. Multicast transmission is employed to transmit updated information among participants in order to reduce network latencies, while system consistency is maintained by an administrative server. Reliable multicast is implemented using distributed message acknowledgment based on cluster cooperation and sliding window technique. The robustness of the framework is guaranteed by the failure detection chain which enables smooth transition when participants join and leave the collaboration, including normal and involuntary leaving. Communication overhead is further reduced by implementing a number of management approaches such as computational policies and collaborative mechanisms. The feasibility of the proposed framework is demonstrated by successfully extending an existing standalone orthopedic surgery trainer into a collaborative simulation system. A series of experiments have been conducted to evaluate the system performance. The results demonstrate that the proposed framework is capable of supporting collaborative surgical simulation.
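
    The window logic behind such a reliable multicast scheme can be sketched without any real networking: at most W updates are unacknowledged at once, and a sequence number is retired only when every cluster has acknowledged it. The names and structure below are invented; the paper's protocol additionally covers failure detection and join/leave handling.

        # Logic-only sketch of sliding-window reliable multicast.
        class MulticastWindow:
            def __init__(self, clusters, window=4):
                self.clusters = set(clusters)
                self.window = window
                self.next_seq = 0
                self.pending = {}        # seq -> clusters still to ack

            def can_send(self):
                return len(self.pending) < self.window

            def send(self):
                assert self.can_send()
                seq, self.next_seq = self.next_seq, self.next_seq + 1
                self.pending[seq] = set(self.clusters)
                return seq               # the update would be multicast here

            def ack(self, seq, cluster):
                self.pending[seq].discard(cluster)
                if not self.pending[seq]:    # all clusters confirmed
                    del self.pending[seq]

        w = MulticastWindow(["A", "B"], window=2)
        s0, s1 = w.send(), w.send()
        print(w.can_send())              # False: window is full
        w.ack(s0, "A"); w.ack(s0, "B")
        print(w.can_send())              # True: seq 0 retired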

  20. Growing a professional network to over 3000 members in less than 4 years: evaluation of InspireNet, British Columbia's virtual nursing health services research network.

    Science.gov (United States)

    Frisch, Noreen; Atherton, Pat; Borycki, Elizabeth; Mickelson, Grace; Cordeiro, Jennifer; Novak Lauscher, Helen; Black, Agnes

    2014-02-21

    Use of Web 2.0 and social media technologies has become a new area of research among health professionals. Much of this work has focused on the use of technologies for health self-management and the ways technologies support communication between care providers and consumers. This paper addresses a new use of technology in providing a platform for health professionals to support professional development, increase knowledge utilization, and promote formal/informal professional communication. Specifically, we report on factors necessary to attract and sustain health professionals' use of a network designed to increase nurses' interest in and use of health services research and to support knowledge utilization activities in British Columbia, Canada. "InspireNet", a virtual professional network for health professionals, is a living laboratory permitting documentation of when and how professionals take up Web 2.0 and social media. Ongoing evaluation documents our experiences in establishing, operating, and evaluating this network. Overall evaluation methods included (1) tracking website use, (2) conducting two member surveys, and (3) soliciting member feedback through focus groups and interviews with those who participated in electronic communities of practice (eCoPs) and other stakeholders. These data have been used to learn about the types of support that seem relevant to network growth. Network growth exceeded all expectations. Members engaged with varying aspects of the network's virtual technologies, such as teams of professionals sharing a common interest, research teams conducting their work, and instructional webinars open to network members. Members used wikis, blogs, and discussion groups to support professional work, as well as a members' database with contact information and areas of interest. The database is accessed approximately 10 times per day. InspireNet public blog posts are accessed roughly 500 times each. At the time of writing, 21 research teams

  1. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network.

    Science.gov (United States)

    Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-11-01

    To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes.

  2. Open source system OpenVPN in a function of Virtual Private Network

    Science.gov (United States)

    Skendzic, A.; Kovacic, B.

    2017-05-01

    The use of Virtual Private Networks (VPNs) can establish a high security level in network communication. VPN technology enables highly secure networking over distributed or public network infrastructure, applying different security and management rules inside the network. It can be set up over different communication channels, such as the Internet or separate ISP communication infrastructure. A VPN creates a secure communication channel over a public network between two endpoints (computers). OpenVPN is an open-source software product under the GNU General Public License (GPL) that can be used to establish VPN communication between two computers inside a business local network over public communication infrastructure. It uses special security protocols and 256-bit encryption, and it is capable of traversing network address translators (NATs) and firewalls. It allows computers to authenticate each other using a pre-shared secret key, certificates, or a username and password. This work gives a review of VPN technology with a special accent on OpenVPN, and also discusses the comparative and financial benefits of using open-source VPN software in a business environment.

  3. Virtualization of Event Sources in Wireless Sensor Networks for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Néstor Lucas Martínez

    2014-12-01

    Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways that become the endpoints where applications can retrieve and process such data. However, applications would also expect from a WSN an event-driven operational model, so that they can be notified whenever specific environmental changes occur, instead of continuously analyzing the data provided periodically. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, in order to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for a virtual representation of event sources in a WSN is proposed. They are modeled as internet resources that are accessible by any internet application, following an Internet of Things approach. The model has been tested in a real implementation where a WSN has been deployed in an open neighborhood environment. Different event sources have been identified in the proposed scenario, and they have been represented following the proposed model.

  4. Virtualization of Event Sources in Wireless Sensor Networks for the Internet of Things

    Science.gov (United States)

    Martínez, Néstor Lucas; Martínez, José-Fernán; Díaz, Vicente Hernández

    2014-01-01

    Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways that become the endpoints where applications can retrieve and process such data. However, applications would also expect from a WSN an event-driven operational model, so that they can be notified whenever specific environmental changes occur, instead of continuously analyzing the data provided periodically. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, in order to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for a virtual representation of event sources in a WSN is proposed. They are modeled as internet resources that are accessible by any internet application, following an Internet of Things approach. The model has been tested in a real implementation where a WSN has been deployed in an open neighborhood environment. Different event sources have been identified in the proposed scenario, and they have been represented following the proposed model. PMID:25470489

  5. Virtualization of event sources in wireless sensor networks for the internet of things.

    Science.gov (United States)

    Lucas Martínez, Néstor; Martínez, José-Fernán; Hernández Díaz, Vicente

    2014-12-01

    Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways that become the endpoints where applications can retrieve and process such data. However, applications would also expect from a WSN an event-driven operational model, so that they can be notified whenever specific environmental changes occur, instead of continuously analyzing the data provided periodically. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, in order to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for a virtual representation of event sources in a WSN is proposed. They are modeled as internet resources that are accessible by any internet application, following an Internet of Things approach. The model has been tested in a real implementation where a WSN has been deployed in an open neighborhood environment. Different event sources have been identified in the proposed scenario, and they have been represented following the proposed model.
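
    A minimal sketch of exposing an event source as an internet resource follows, using only the Python standard library: an HTTP endpoint returns the latest events of a virtual sensor as JSON. The URL layout and payload are invented; the paper defines its own resource model.

        # Toy event-source endpoint; serve_forever() blocks until killed.
        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        EVENTS = {"temperature-7": [{"ts": 1700000000,
                                     "event": "above-threshold",
                                     "value": 31.5}]}

        class EventSource(BaseHTTPRequestHandler):
            def do_GET(self):
                key = self.path.strip("/").split("/")[-1]
                if key in EVENTS:
                    body = json.dumps(EVENTS[key]).encode()
                    self.send_response(200)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(body)
                else:
                    self.send_error(404, "unknown event source")

        # e.g. GET http://localhost:8080/sensors/temperature-7
        HTTPServer(("", 8080), EventSource).serve_forever()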

  6. Mobile Virtual Network Operator Information Systems for Increased Sustainability in Utilities

    DEFF Research Database (Denmark)

    Joensen, Hallur Leivsgard; Tambo, Torben

    2011-01-01

    ..., sales and buying processes are separated from physical networks and energy production. This study aims to characterise and evaluate information systems supporting the transformation of the free market-orientation of energy and provision of utilities in a cross-sectorial proposition known as Mobile Virtual Network Operator (MVNO). Emphasis is particularly on standardised information systems for automatically linking consumers, sellers and integration of network infrastructure actors. The method used is a feasibility study assessing business and information processes of a forthcoming utilities market... sales from efficiency of business processes, underlying information systems, and the ability to make the link from consumption to cost visual and transparent to consumers. The conclusion is that the energy sector should look into other sectors and learn from information systems which ease up business...

  7. Procedure to Solve Network DEA Based on a Virtual Gap Measurement Model

    Directory of Open Access Journals (Sweden)

    Fuh-hwa Franklin Liu

    2017-01-01

    Network DEA models assess production systems that contain a set of network-structured subsystems. Each subsystem has input and output measures from and to the external network, and intermediate measures that link it to other subsystems. Most published studies demonstrate how to employ DEA models to establish network DEA models; neither static nor dynamic network DEA models adjust the links. This paper applies the virtual gap measurement (VGM) model to construct a mixed-integer program for solving dynamic network DEA problems. The mixed-integer program sets the total number of “as-input” and “as-output” measures equal to the total number of links in the objective function. To obtain the best-practice efficiency, each DMU determines a set of weights for inputs, outputs, and links. Each link is treated either “as-input” or “as-output.” Input and as-input measures reduce slacks, whereas output and as-output measures increase slacks, so as to attain their targets on the production frontier.

  8. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.

  9. KfK seminar series on supercomputing and visualization from May to September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    From May 1992 to September 1992, a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP)

  10. Virtual shelves in a digital library: a framework for access to networked information sources.

    Science.gov (United States)

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses the metaphor of a virtual shelf: a general-purpose server that is dedicated to a particular information subject class, and whose identifier identifies that subject class. Location-independent call numbers, based on standard vocabulary codes, are assigned to information sources. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems: one based on the Open Software Foundation/Distributed Computing Environment and the other based on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with the two traditional styles of selecting information, searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
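
    Reduced to its essentials, the two-stage resolution described above is a pair of table lookups, as in the sketch below; all identifiers and addresses are invented examples. Only the location directory needs updating when an information source moves.

        # Two-stage resolution: call number -> virtual shelf -> location.
        SHELF_OF_CALL_NUMBER = {      # classification code -> shelf id
            "C22.550": "shelf:cardiology",
            "C10.228": "shelf:neurology",
        }
        LOCATION_DIRECTORY = {        # shelf id -> current network location
            "shelf:cardiology": "https://shelves.example.org/cardio/",
            "shelf:neurology": "https://shelves.example.org/neuro/",
        }

        def resolve(call_number):
            shelf = SHELF_OF_CALL_NUMBER[call_number]
            return LOCATION_DIRECTORY[shelf]  # only this table changes on moves

        print(resolve("C22.550"))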

  11. VPN (Virtual Private Network) Performance Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Calderon, Calixto; Goncalves, Joao G.M.; Sequeira, Vitor [Joint Research Centre, Ispra (Italy). Inst. for the Protection and Security of the Citizen; Vandaele, Roland; Meylemans, Paul [European Commission, DG-TREN (Luxembourg)

    2003-05-01

    Virtual Private Networks (VPNs) are an important technology allowing for secure communications through insecure transmission media (i.e., the Internet) by adding authentication and encryption to the existing protocols. This paper describes some VPN performance indicators measured over international communication links. An ISDN-based VPN link was established between the Joint Research Centre, Ispra site, Italy, and EURATOM Safeguards in Luxembourg. This link connected two EURATOM Safeguards FAST surveillance stations and used hardware from different vendors (a Cisco 1720 router and a Nokia CC-500 Gateway). To authenticate and secure this international link, we used several methods at different levels of the seven-layer ISO network protocol stack (e.g., the callback feature and the CHAP authentication protocol, the Challenge Handshake Authentication Protocol). The tests involved the use of different encryption algorithms and different ways of periodically renewing session secret keys, since these elements significantly influence the transmission throughput. Future tests will include a wide variety of wireless transmission media and terminal equipment technologies, in particular PDAs (Personal Digital Assistants) and notebook PCs. These tests aim at characterising the functionality of VPNs whenever field inspectors wish to contact headquarters to access information from a central archive database or transmit local measurements or documents. These technologies cover wireless transmission needs at different geographical scales: room level with Bluetooth, floor or building level with Wi-Fi, and region or country level with GPRS.

  12. Multi-Hop Link Capacity of Multi-Route Multi-Hop MRC Diversity for a Virtual Cellular Network

    Science.gov (United States)

    Daou, Imane; Kudoh, Eisuke; Adachi, Fumiyuki

    In the virtual cellular network (VCN), proposed for high-speed mobile communications, the signal transmitted from a mobile terminal is received by wireless ports distributed in each virtual cell and relayed to the central port, which acts as a gateway to the core network. In this paper, we apply multi-route multi-hop maximal-ratio combining (MHMRC) diversity in order to decrease the transmit power and increase the multi-hop link capacity. The transmit power, the interference power and the link capacity are evaluated for a DS-CDMA multi-hop VCN by computer simulation. The multi-route MHMRC diversity can be applied not only to DS-CDMA but also to other access schemes (e.g., MC-CDMA, OFDM, etc.).

  13. Virtual Worlds for Virtual Organizing

    Science.gov (United States)

    Rhoten, Diana; Lutters, Wayne

    The members and resources of a virtual organization are dispersed across time and space, yet they function as a coherent entity through the use of technologies, networks, and alliances. As virtual organizations proliferate and become increasingly important in society, many may exploit the technical architectures of virtual worlds, which are the confluence of computer-mediated communication, telepresence, and virtual reality originally created for gaming. A brief socio-technical history describes their early origins and the waves of progress followed by stasis that brought us to the current period of renewed enthusiasm. Examination of contemporary examples demonstrates how three genres of virtual worlds have enabled new arenas for virtual organizing: developer-defined closed worlds, user-modifiable quasi-open worlds, and user-generated open worlds. Among expected future trends are an increase in collaboration born virtually rather than imported from existing organizations, a tension between high-fidelity recreations of the physical world and hyper-stylized imaginations of fantasy worlds, and the growth of specialized worlds optimized for particular sectors, companies, or cultures.

  14. Quality of Service Control Based on Virtual Private Network Services in a Wide Area Gigabit Ethernet Optical Test Bed

    Science.gov (United States)

    Rea, Luca; Pompei, Sergio; Valenti, Alessandro; Matera, Francesco; Zema, Cristiano; Settembre, Marina

    We report an experimental investigation of the Virtual Private LAN Service technique for guaranteeing quality of service in the metro/core network, including in the presence of an access bandwidth bottleneck. We also show how a virtual private network can be set up in answer to a user request in a very fast way. The tests were performed in a GMPLS test bed with GbE core routers linked by long (tens of kilometers) GbE G.652 fiber links.

  15. The Design and Analysis of Virtual Network Configuration for a Wireless Mobile ATM Network

    Science.gov (United States)

    Bush, Stephen F.

    1999-05-01

    This research concentrates on the design and analysis of an algorithm referred to as Virtual Network Configuration (VNC), which uses predicted future states of a system for faster network configuration and management. VNC is applied to the configuration of a wireless mobile ATM network. VNC is built on techniques from parallel discrete event simulation merged with constraints from real-time systems and applied to mobile ATM configuration and handoff. Configuration in a mobile network is a dynamic and continuous process. Factors such as load, distance, capacity and topology are all constantly changing in a mobile environment. The VNC algorithm anticipates configuration changes and speeds the reconfiguration process by pre-computing and caching results. VNC propagates local prediction results throughout the VNC-enhanced system. The Global Positioning System is an enabling technology for the use of VNC in mobile networks because it provides location information and accurate time for each node. This research has resulted in well-defined structures for the encapsulation of physical processes within Logical Processes and a generic library for enhancing a system with VNC. Enhancing an existing system with VNC is straightforward, assuming the existing physical processes do not have side effects. The benefit of prediction is gained at the cost of additional traffic and processing. This research includes an analysis of VNC and suggestions for optimization of the VNC algorithm and its parameters.

  16. Orthotropic conductivity reconstruction with virtual-resistive network and Faraday's law

    KAUST Repository

    Lee, Min-Gi

    2015-06-01

    We obtain existence and uniqueness at the same time in the reconstruction of orthotropic conductivity in two space dimensions by using two sets of internal current densities and boundary conductivity. The curl-free equation of Faraday's law is taken instead of the elliptic equation in divergence form that is typically used in electrical impedance tomography. A reconstruction method based on a layered bricks-type virtual-resistive network is developed to reconstruct orthotropic conductivity with up to 40% multiplicative noise.
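
    The contrast drawn above can be written out explicitly. This is a standard rendering with assumed notation (u the electric potential, J the current density, σ the orthotropic conductivity and ρ its inverse); the paper's exact equations may differ.

        \nabla \cdot (\sigma \nabla u) = 0
        \qquad \text{(elliptic, divergence form)}

        \nabla \times (\rho J)
          = \partial_x(\rho_2 J_2) - \partial_y(\rho_1 J_1) = 0,
        \qquad \rho = \sigma^{-1} = \operatorname{diag}(\rho_1, \rho_2)
        \qquad \text{(curl-free form of Faraday's law)}

    The second form follows from E = ρJ together with ∇ × E = 0 for a static field, which is why the two sets of internal current densities suffice as data.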

  17. VIRTUAL WORLD MARKETING: THE IMPORTANCE OF BEING ON SOCIAL NETWORKS

    OpenAIRE

    EVERTON DAMIÃO TAVANO SANTOS; JOÃO PAULO DA SILVA GOMES; CARLOS EDUARDO CICCONE

    2012-01-01

    Increasingly present in people's daily life and seeking to satisfy their wishes, marketing is trying to adapt itself to consumers' real needs as well as to the environments they currently use. With the growing use of technology and internet access, marketing ceases to act only in physical media such as magazines, newspapers and pamphlets and goes further, searching for new environments where customers gather, such as social networking in virtual worlds, where the dissemination of informa...

  18. Virtual water trade and country vulnerability: A network perspective

    Science.gov (United States)

    Sartori, Martina; Schiavo, Stefano

    2015-04-01

    This work investigates the relationship between countries' participation in virtual water trade and their vulnerability to external shocks from a network perspective. In particular, we investigate whether (i) possible sources of local national crises may interact with the system, propagating through the network and affecting the other countries involved; (ii) the topological characteristics of the international agricultural trade network, translated into virtual water-equivalent flows, may favor countries' vulnerability to external crises. Our work contributes to the debate on the potential merits and risks associated with openness to trade in agricultural and food products. On the one hand, trade helps to ensure that even countries with limited water (and other relevant) resources have access to sufficient food and contribute to the global saving of water. On the other hand, there are fears that openness may increase the vulnerability to external shocks and thus make countries worse off. Here we abstract from political considerations about food sovereignty and independence from imports and focus instead on investigating whether the increased participation in global trade that the world has witnessed in the last 30 years has made the system more susceptible to large shocks. Our analysis reveals that: (i) the probability of larger supply shocks has not increased over time; (ii) the topological characteristics of the VW network are not such as to favor the systemic risk associated with shock propagation; and (iii) higher-order interconnections may reveal further important information about the structure of a network. Regarding the first result, fluctuations in output volumes, among the sources of shock analyzed here, are more likely to generate some instability. The first implication is that, on one side, past national or regional economic crises were not necessarily brought about or strengthened by global trade. The second, more remarkable, implication is that, on

  19. Implementing Virtual Private Networking for Enabling Lower Cost, More Secure Wide Area Communications at Sandia National Laboratories; TOPICAL

    International Nuclear Information System (INIS)

    MILLER, MARC M.; YONEK JR., GEORGE A.

    2001-01-01

    Virtual Private Networking is a new communications technology that promises lower cost, more secure wide area communications by leveraging public networks such as the Internet. Sandia National Laboratories has embraced the technology for interconnecting remote sites to Sandia's corporate network, and for enabling remote access for both dial-up and broadband users.

  20. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  1. Emulation of networking elements interacting with virtual machines

    OpenAIRE

    Binker, Carlos; Pére, Alejandro; Buranits, Guillermo; Zurdo, Eliseo

    2016-01-01

    In this work we aim to show the emulation of networking elements such as switches and routers interacting with virtual machines running diverse operating systems, such as Windows, Mac OS X, and Linux in different distributions. For this purpose, a free-software platform called GNS3 (Graphical Network Simulator 3) is used. After a more detailed analysis of this platform and its associated programs, a laboratory example will be shown in whi...

  2. Second Line of Defense Virtual Private Network Guidance for Deployed and New CAS Systems

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Surya V.; Thronas, Aaron I.

    2010-01-01

    This paper discusses the importance of remote access via virtual private network (VPN) for the Second Line of Defense (SLD) Central Alarm System (CAS) sites, the requirements for maintaining secure channels while using VPN and implementation requirements for current and future sites.

  3. Dynamic photonic lightpaths in the StarPlane network

    NARCIS (Netherlands)

    Grosso, P.; Marchal, D.; Maassen, J.; Bernier, E.; Xu, L.; de Laat, C.

    2009-01-01

    The StarPlane project enables users to dynamically control network photonic paths. Applications running on the Distributed ASCI Supercomputer (DAS-3) can manipulate wavelengths in the Dutch research and education network SURFnet6. The goal is to achieve fast switching times so that when the

  4. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created a basis for the development of a new research area, the Economics of Quality. Its tools allow the use of model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical and social regularities governing the functioning of complex socio-economic systems. Extensive application and development of such models, together with system modeling using supercomputer technologies, will, in our firm belief, bring research on socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the model simulation of multi-agent social systems and, no less important, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the increase in computing power, it has become possible to describe the behavior of the many separate fragments of a complex system, as socio-economic systems are. The article also reviews the experience of foreign scientists and practitioners in running AFM on supercomputers, and analyzes the example of an AFM developed at CEMI RAS, including the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation for forecasting the population of St. Petersburg under three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  5. U-Net/SLE: A Java-Based User-Customizable Virtual Network Interface

    Directory of Open Access Journals (Sweden)

    Matt Welsh

    1999-01-01

    We describe U-Net/SLE (Safe Language Extensions), a user-level network interface architecture which enables per-application customization of communication semantics through the downloading of user extension applets, implemented as Java classfiles, to the network interface. This architecture permits applications to safely specify code to be executed within the NI on message transmission and reception. By leveraging the existing U-Net model, applications may implement protocol code at the user level, within the NI, or using some combination of the two. Our current implementation, using the Myricom Myrinet interface and a small Java Virtual Machine subset, allows host communication overhead to be reduced and improves the overlap of communication and computation during protocol processing.
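
    The dispatch idea can be sketched roughly as follows (in Python for brevity; the real system executes Java bytecode applets inside the network interface, and all names here are illustrative assumptions):

        class VirtualNIC:
            """Toy per-application handler registry in the spirit of
            downloadable transmit/receive extensions."""
            def __init__(self):
                self.on_tx, self.on_rx = {}, {}

            def register(self, app_id, tx_handler=None, rx_handler=None):
                if tx_handler: self.on_tx[app_id] = tx_handler
                if rx_handler: self.on_rx[app_id] = rx_handler

            def transmit(self, app_id, payload):
                handler = self.on_tx.get(app_id, lambda p: p)
                return handler(payload)   # e.g. add an app-specific header

            def receive(self, app_id, frame):
                handler = self.on_rx.get(app_id, lambda f: f)
                return handler(frame)     # e.g. filtering or reassembly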

  6. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne, etc.), observational facilities (meteorological, eddy covariance, etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies, like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, in our case, for large scale cluster analysis in particular. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets; they have scaling limitations and are mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
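
    The data-parallel core of a distributed k-means of this kind can be sketched as follows (a minimal MPI illustration, not the authors' MSTC/CUDA/OpenACC code; requires mpi4py and NumPy, and assumes every rank holds a shard of the observations):

        import numpy as np
        from mpi4py import MPI

        def parallel_kmeans(local_X, k, iters=20, seed=0):
            """Centroids are kept consistent across ranks with collective reductions."""
            comm = MPI.COMM_WORLD
            d = local_X.shape[1]
            centroids = None
            if comm.rank == 0:
                rng = np.random.default_rng(seed)
                idx = rng.choice(len(local_X), size=k, replace=False)
                centroids = local_X[idx].copy()
            centroids = comm.bcast(centroids, root=0)
            for _ in range(iters):
                # Assign each local observation to its nearest centroid.
                d2 = ((local_X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
                labels = d2.argmin(axis=1)
                # Combine per-cluster partial sums and counts across all ranks.
                sums = np.zeros((k, d))
                counts = np.zeros(k)
                for j in range(k):
                    mask = labels == j
                    sums[j] = local_X[mask].sum(axis=0)
                    counts[j] = mask.sum()
                comm.Allreduce(MPI.IN_PLACE, sums, op=MPI.SUM)
                comm.Allreduce(MPI.IN_PLACE, counts, op=MPI.SUM)
                nonempty = counts > 0
                centroids[nonempty] = sums[nonempty] / counts[nonempty, None]
            return centroids, labels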

  7. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, high levels of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently, ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  8. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation

  9. Virtual Global Accelerator Network (VGAN)(LCC-0083)

    International Nuclear Information System (INIS)

    Larsen, R

    2003-01-01

    The concept of a Global Accelerator Network (GAN) has been proposed by key members of ICFA as a cornerstone of a future International Linear Collider (LC). GAN would provide a tool for the participants of an international collaboration to participate in the actual running of the machine from different parts of the world. Some technical experts view the concept as technologically trivial, and instead point out the significant sociological, organizational and administrative problems that must be surmounted in creating a truly workable system. This note proposes that many real issues can be explored by building a simulator (VGAN) consisting of a virtual accelerator model, a global controls model, and a functioning human organizational model, a tool that would explore and resolve many real problems of GAN and the LC enterprise during the LC preliminary design and testing phase

  10. PERANCANGAN VIRTUAL PRIVATE NETWORK DENGAN SERVER LINUX PADA PT. DHARMA GUNA SAKTI

    Directory of Open Access Journals (Sweden)

    Siswa Trihadi

    2008-05-01

    The purpose of this research is to analyze and design a network between the head and branch offices, and for company mobile users, which can be used to increase the performance and effectiveness of the company in carrying out its business processes. Three main methods were used in this research: library study, analysis, and design. The library study method consisted of searching theoretical sources, knowledge, and other information in books, library articles, and internet pages. The analysis method consisted of observing the company network and conducting interviews to acquire a description of the current business process and to identify problems that could be solved using network technology. The design method consisted of drawing a network topology diagram, determining the elements needed to design a VPN technology, suggesting a configuration system, and testing whether the suggested system could run well. The result is that the network between the head and branch offices and the mobile users can be connected successfully using VPN technology. In conclusion, connecting the head and branch offices enables centralization of the company database, and the suggested VPN network ran well by encapsulating the data packages being sent. Keywords: network, Virtual Private Network (VPN), library study, analysis, design

  11. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  12. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  13. Virtualization in control system environment

    International Nuclear Information System (INIS)

    Shen, L.R.; Liu, D.K.; Wan, T.M.

    2012-01-01

    In a large-scale distributed control system, many common services compose an environment for the entire control system, such as the server systems for the common software base library, the application server, the archive server, and so on. This paper describes a virtualization realization for a control system environment, covering virtualization of the servers, storage, network system and applications of the control system. With a virtualization instance of the EPICS-based control system environment built with VMware vSphere v4, we tested the whole functionality of this virtualized environment in the SSRF control system, including the common servers for NFS, NIS, NTP, Boot and the EPICS base and extension library tools. We also applied virtualization to application servers such as the Archive, Alarm and EPICS gateway servers and all of the network-based IOCs. In particular, we tested high availability and VMotion for EPICS asynchronous IOCs successfully under the different VLAN configurations of the current SSRF control system network. (authors)

  14. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  15. The Relationship Between the Use of Virtual Social Networks with Academic Achievement and Students' Confidence in Interpersonal Relations at Birjand University of Medical Sciences

    Directory of Open Access Journals (Sweden)

    aliakbar ajam

    2017-06-01

    Background and Objective: This study investigated the relationship between the use of mobile-based virtual social networks and the academic achievement and interpersonal trust of students at Birjand University of Medical Sciences. Materials and Methods: This was a descriptive correlational study. The study population included students of the College of Public Health and students of medicine at Birjand University of Medical Sciences. Based on purposive sampling, 150 students were selected. For data collection, the Rempel & Holmes scale of trust in interpersonal relations was used, along with a researcher-made questionnaire on social network use and academic achievement. Data were analyzed with SPSS version 20. Results: There was a significant negative relationship between the time allotted to the networks and the number of memberships in virtual social groups, on the one hand, and the academic achievement of students, on the other (P<0.01). The academic achievement of students who used virtual social networks for scientific purposes was higher than that of those who used them for non-scientific purposes. There was a significant negative correlation between the time allocated to social networks and factors such as capability of trust, predictability and loyalty (P<0.05). Conclusion: It is recommended that workshops and training courses be held for the practical learning of virtual networks.

  16. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    Directory of Open Access Journals (Sweden)

    Kang Xie

    2014-01-01

    In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analysis of the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that the new approach is amenable to parallelism and to analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better.

  17. Global detection of live virtual machine migration based on cellular neural networks.

    Science.gov (United States)

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analysis of the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that the new approach is amenable to parallelism and to analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better.
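
    For orientation, a minimal particle swarm optimizer of the kind used to tune such parameters is sketched below (the paper's bubble-sort refinement is not reproduced; all names and constants are illustrative assumptions):

        import numpy as np

        def pso(objective, dim, n_particles=30, iters=100,
                bounds=(-1.0, 1.0), w=0.7, c1=1.5, c2=1.5, seed=0):
            """Generic PSO: minimize `objective` over a box in R^dim."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))   # positions
            v = np.zeros_like(x)                          # velocities
            pbest = x.copy()                              # per-particle best positions
            pbest_f = np.array([objective(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()            # global best position
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()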

  18. Making Choices in the Virtual World: The New Model at United Technologies Information Network.

    Science.gov (United States)

    Gulliford, Bradley

    1998-01-01

    Describes changes in services of the United Technologies Corporation Information Network from a traditional library system to a virtual system of World Wide Web sites, a document-delivery unit, telephone and e-mail reference, and desktop technical support to provide remote access. Staff time, security, and licensing issues are addressed.…

  19. Holding-time-aware asymmetric spectrum allocation in virtual optical networks

    Science.gov (United States)

    Lyu, Chunjian; Li, Hui; Liu, Yuze; Ji, Yuefeng

    2017-10-01

    Virtual optical networks (VONs) have been considered a promising solution for supporting current high-capacity dynamic traffic and achieving rapid application deployment. Since most network services in VONs (e.g., high-definition video, cloud computing, distributed storage) are provisioned by dedicated data centers and need different amounts of bandwidth in the two directions, the network traffic is mostly asymmetric. The common strategy of symmetric traffic provisioning in optical networks wastes spectrum resources under such traffic patterns. In this paper, we design a holding-time-aware asymmetric spectrum allocation module based on an SDON architecture, and an asymmetric spectrum allocation algorithm based on the module is proposed. To reduce the waste of spectrum resources, the algorithm attempts to reallocate the idle unidirectional spectrum slots in VONs that arise from the asymmetry of services' bidirectional bandwidth. This part of the resources can be exploited by other requests, such as short-lived non-VON requests. We also introduce a two-dimensional asymmetric resource model for maintaining information on idle VON spectrum resources in the spectrum and time domains. Moreover, a simulation is designed to evaluate the performance of the proposed algorithm, and the results show that the proposed asymmetric spectrum allocation algorithm reduces resource waste and blocking probability.
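
    The two-dimensional (spectrum x time) bookkeeping can be illustrated with a toy model (a sketch under assumed semantics, not the paper's algorithm): each link direction keeps its own occupancy grid, so the unidirectional holes left by asymmetric VON demand can be offered to short-lived requests.

        import numpy as np

        class DirectionalResourceGrid:
            """Occupancy grid for one link direction: rows are spectrum slots,
            columns are time units; names and policy are illustrative."""
            def __init__(self, n_slots, horizon):
                self.busy = np.zeros((n_slots, horizon), dtype=bool)

            def first_fit(self, width, duration, start=0):
                # Lowest contiguous block of `width` slots idle for `duration`
                # time units from `start`; None if the request is blocked.
                for s in range(self.busy.shape[0] - width + 1):
                    if not self.busy[s:s + width, start:start + duration].any():
                        return s
                return None

            def allocate(self, width, duration, start=0):
                s = self.first_fit(width, duration, start)
                if s is not None:
                    self.busy[s:s + width, start:start + duration] = True
                return s

        # Asymmetric VON traffic leaves one direction partly idle; a short
        # non-VON request can then be packed into the unidirectional hole:
        up, down = DirectionalResourceGrid(64, 100), DirectionalResourceGrid(64, 100)
        up.allocate(8, 100); down.allocate(2, 100)   # asymmetric bidirectional demand
        print(down.first_fit(4, 10))                 # idle downstream slots are reusable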

  20. Dynamically allocated virtual clustering management system

    Science.gov (United States)

    Marcus, Kelvin; Cannata, Jess

    2013-05-01

    The U.S Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, thus only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks. The system deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex, private networks eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment. Users also control when to shutdown their clusters.

  1. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer
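
    A minimal example of the kind of lattice-gas cellular automaton meant here is the classic HPP model, where Boolean particles on a square lattice collide and then stream (a short sketch for illustration; cellular automata machines run such rules natively in hardware):

        import numpy as np

        # HPP lattice gas: four Boolean populations per site, one per direction.
        # Convention assumed here: 0 = +x, 1 = +y, 2 = -x, 3 = -y.
        def hpp_step(n):
            # Collision: head-on pairs rotate 90 degrees when the cross pair is empty.
            a = n[0] & n[2] & ~n[1] & ~n[3]   # +x/-x pair, y channels free
            b = n[1] & n[3] & ~n[0] & ~n[2]   # +y/-y pair, x channels free
            flip = a | b
            for i in range(4):
                n[i] ^= flip                  # swap the colliding pair onto the other axis
            # Streaming: every particle hops one site along its direction.
            n[0] = np.roll(n[0], 1, axis=1)
            n[2] = np.roll(n[2], -1, axis=1)
            n[1] = np.roll(n[1], -1, axis=0)
            n[3] = np.roll(n[3], 1, axis=0)
            return n

        rng = np.random.default_rng(0)
        gas = rng.random((4, 128, 128)) < 0.2  # random initial occupation
        for _ in range(100):
            gas = hpp_step(gas)
        density = gas.sum(axis=0)              # coarse-grained observable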

  2. Control, data acquisition, data analysis and remote participation in LHD

    International Nuclear Information System (INIS)

    Nagayama, Y.; Emoto, M.; Nakanishi, H.; Sudo, S.; Imazu, S.; Inagaki, S.; Iwata, C.; Kojima, M.; Nonomura, M.; Ohsuna, M.; Tsuda, K.; Yoshida, M.; Chikaraishi, H.; Funaba, H.; Horiuchi, R.; Ishiguro, S.; Ito, Y.; Kubo, S.; Mase, A.; Mito, T.

    2008-01-01

    This paper presents the control, data acquisition, data analysis and remote participation facilities of the Large Helical Device (LHD), which is designed to confine the plasma in steady state. In LHD the plasma duration exceeds 3000 s by controlling the plasma position, the density and the ICRF heating. The 'LABCOM' data acquisition system takes both the short-pulse and the steady-state data. A two-layer Mass Storage System with RAIDs and Blu-ray Disk jukeboxes in a storage area network has been developed to increase capacity of storage. The steady-state data can be monitored with a Web browser in real time. A high-level data analysis system with Web interfaces is being developed in order to provide easier usage of LHD data and large FORTRAN codes in a supercomputer. A virtual laboratory system for the Japanese fusion community has been developed with Multi-protocol Label Switching Virtual Private Network Technology. Collaborators at remote sites can join the LHD experiment or use the NIFS supercomputer system as if they were working in the LHD control room

  3. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  4. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  5. How the human brain goes virtual: distinct cortical regions of the person-processing network are involved in self-identification with virtual agents.

    Science.gov (United States)

    Ganesh, Shanti; van Schie, Hein T; de Lange, Floris P; Thompson, Evan; Wigboldus, Daniël H J

    2012-07-01

    Millions of people worldwide engage in online role-playing with their avatar, a virtual agent that represents the self. Previous behavioral studies have indicated that many gamers identify more strongly with their avatar than with their biological self. Through their avatar, gamers develop social networks and learn new social-cognitive skills. The cognitive neurosciences have yet to identify the neural processes that underlie self-identification with these virtual agents. We applied functional neuroimaging to 22 long-term online gamers and 21 nongaming controls, while they rated personality traits of self, avatar, and familiar others. Strikingly, neuroimaging data revealed greater avatar-referential cortical activity in the left inferior parietal lobe, a region associated with self-identification from a third-person perspective. The magnitude of this brain activity correlated positively with the propensity to incorporate external body enhancements into one's bodily identity. Avatar-referencing furthermore recruited greater activity in the rostral anterior cingulate gyrus, suggesting relatively greater emotional self-involvement with one's avatar. Post-scanning behavioral data revealed superior recognition memory for avatar relative to others. Interestingly, memory for avatar positively covaried with play duration. These findings significantly advance our knowledge about the brain's plasticity to self-identify with virtual agents and the human cognitive-affective potential to live and learn in virtual worlds.

  6. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.

  7. Physicists set new record for network data transfer

    CERN Multimedia

    2007-01-01

    "An international team of physicists, computer scientists, and network engineers joined forces to set new records for sustained data transfer between storage systems durint the SuperComputing 2006 (SC06) Bandwidth Challenge (BWC). (3 pages)

  8. Virtual Enterprises, Mobile Markets and Volatile Customers

    NARCIS (Netherlands)

    F.P.H. Jaspers (Ferdinand); W. Hulsink (Wim); J.J.M. Theeuwes (Myrte)

    2005-01-01

    textabstractRecently, several new mobile virtual network operators (MVNOs) have entered the European mobile telecommunications markets. These service providers do not own a mobile network, but instead they buy capacity from other companies. Because these virtual operators do not possess an

  9. VTAC: virtual terrain assisted impact assessment for cyber attacks

    Science.gov (United States)

    Argauer, Brian J.; Yang, Shanchieh J.

    2008-03-01

    Overwhelming intrusion alerts have made timely response to network security breaches a difficult task. Correlating alerts to produce a higher-level view of the intrusion state of a network thus becomes an essential element in network defense. This work proposes to analyze correlated or grouped alerts and determine their 'impact' on the services and users of the network. A network is modeled as a 'virtual terrain' across which cyber attacks maneuver. Overlaying correlated attack tracks on the virtual terrain exhibits the vulnerabilities exploited by each track and the relationships between them and different network entities. The proposed impact assessment algorithm utilizes the graph-based virtual terrain model and combines assessments of the damage caused by the attacks. The combined impact scores make it possible to identify severely damaged network services and affected users. Several scenarios are examined to demonstrate the uses of the proposed Virtual Terrain Assisted Impact Assessment for Cyber Attacks (VTAC).
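
    One way to picture the combination step (a hypothetical sketch, not VTAC's actual scoring; node names and the damping rule are illustrative): damage observed on attacked hosts is pushed downstream through the virtual-terrain dependency graph, attenuated by edge weights, until services and users receive a combined score.

        import networkx as nx

        def propagate_impact(terrain: nx.DiGraph, initial: dict, damping=0.5):
            """Push attack damage along dependency edges, keeping the
            strongest contribution seen at each downstream node."""
            impact = dict(initial)
            frontier = list(initial)
            while frontier:
                node = frontier.pop()
                for dep in terrain.successors(node):
                    contributed = impact[node] * damping * terrain[node][dep].get("w", 1.0)
                    if contributed > impact.get(dep, 0.0):
                        impact[dep] = contributed
                        frontier.append(dep)
            return impact

        terrain = nx.DiGraph()
        terrain.add_edge("web-host", "web-service", w=1.0)
        terrain.add_edge("web-service", "users", w=0.8)
        print(propagate_impact(terrain, {"web-host": 1.0}))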

  10. State Virtual Libraries

    Science.gov (United States)

    Pappas, Marjorie L.

    2003-01-01

    Virtual library? Electronic library? Digital library? Online information network? These all apply to the growing number of Web-based resource collections managed by consortiums of state library entities. Some, like "INFOhio" and "KYVL" ("Kentucky Virtual Library"), have been available for a few years, but others are just starting. Searching for…

  11. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  12. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  13. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories are requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and remote users connected to them, the networking components must be optimized from a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  14. Poverty-Related Diseases College: a virtual African-European network to build research capacity.

    Science.gov (United States)

    Dorlo, Thomas P C; Fernández, Carmen; Troye-Blomberg, Marita; de Vries, Peter J; Boraschi, Diana; Mbacham, Wilfred F

    2016-01-01

    The Poverty-Related Diseases College was a virtual African-European college and network that connected young African and European biomedical scientists working on poverty-related diseases. The aim of the Poverty-Related Diseases College was to build sustainable scientific capacity and international networks in poverty-related biomedical research in the context of the development of Africa. The Poverty-Related Diseases College consisted of three elective and mandatory training modules followed by a reality check in Africa and a science exchange in either Europe or the USA. In this analysis paper, we present our experience and evaluation, discuss the strengths and encountered weaknesses of the programme, and provide recommendations to policymakers and funders.

  15. Foodsheds in Virtual Water Flow Networks: A Spectral Graph Theory Approach

    Directory of Open Access Journals (Sweden)

    Nina Kshetry

    2017-06-01

    A foodshed is a geographic area from which a population derives its food supply, but a method to determine boundaries of foodsheds has not been formalized. Drawing on the food–water–energy nexus, we propose a formal network science definition of foodsheds by using data from virtual water flows, i.e., water that is virtually embedded in food. In particular, we use spectral graph partitioning for directed graphs. If foodsheds turn out to be geographically compact, it suggests the food system is local and therefore reduces energy and externality costs of food transport. Using our proposed method we compute foodshed boundaries at the global-scale, and at the national-scale in the case of two of the largest agricultural countries: India and the United States. Based on our determination of foodshed boundaries, we are able to better understand commodity flows and whether foodsheds are contiguous and compact, and other factors that impact environmental sustainability. The formal method we propose may be used more broadly to study commodity flows and their impact on environmental sustainability.
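
    A minimal version of the partitioning step is sketched below (the paper partitions the directed flow graph directly; symmetrizing the flow matrix here is a simplifying assumption for a short example):

        import numpy as np

        def spectral_bisect(flows: np.ndarray):
            """Split nodes into two groups using the Fiedler vector of the
            Laplacian of a symmetrized virtual-water flow matrix."""
            W = flows + flows.T                   # symmetrize directed VW flows
            L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian
            eigvals, eigvecs = np.linalg.eigh(L)  # symmetric eigenproblem
            fiedler = eigvecs[:, 1]               # 2nd-smallest eigenvalue's vector
            return fiedler >= 0                   # Boolean group membership

        flows = np.array([[0, 9, 1, 0],
                          [8, 0, 0, 1],
                          [0, 1, 0, 7],
                          [1, 0, 9, 0]], dtype=float)
        print(spectral_bisect(flows))             # two tightly coupled pairs emerge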

  16. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms
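
    In standard plane-wave notation (assumed here, since the abstract does not spell it out), each band n at wavevector k is expanded over reciprocal-lattice vectors G,

        \psi_{n\mathbf{k}}(\mathbf{r}) = \sum_{\mathbf{G}} c_{n\mathbf{k}}(\mathbf{G})\,
            e^{\,i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}},
        \qquad \frac{\hbar^2}{2m}\,\lvert \mathbf{k}+\mathbf{G} \rvert^2 \le E_{\mathrm{cut}},

    so distributing the plane waves across processors amounts to giving each hypercube node a slice of the G vectors, and hence of the coefficients c and the FFT mesh.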

  17. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With the integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.) [de

  18. FPS scientific computers and supercomputers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  19. Cyber entertainment system using an immersive networked virtual environment

    Science.gov (United States)

    Ihara, Masayuki; Honda, Shinkuro; Kobayashi, Minoru; Ishibashi, Satoshi

    2002-05-01

    The authors are examining a cyber entertainment system that applies IPT (Immersive Projection Technology) displays to the entertainment field. This system enables users who are in remote locations to communicate with each other so that they feel as if they are together. Moreover, the system gives those users a high degree of presence through stereoscopic vision, a haptic interface and stereo sound. This paper introduces the system from the viewpoint of space sharing across the network and elucidates its operation using the theme of golf. The system is developed by integrating avatar control, an I/O device, communication links, virtual interaction, mixed reality, and physical simulations. Pairs of these environments are connected across the network, allowing two players to compete. An avatar of each player is displayed on the other player's IPT display in the remote location and is driven by only two magnetic sensors. That is, in the proposed system, users do not need to wear a data suit with many sensors, and they are able to play golf without encumbrance.

  20. Developing a Hybrid Virtualization Platform Design for Cyber Warfare Training and Education

    Science.gov (United States)

    2010-06-01

    ...ability to work with the network independent of the actual underlying physical topology. Virtual Distributed Ethernet (VDE) is an abstraction of the networking components involved in a typical Ethernet network [18]. It allows for virtual...

  1. Recent History and Geography of Virtual Water Trade

    Science.gov (United States)

    Carr, Joel A.; D’Odorico, Paolo; Laio, Francesco; Ridolfi, Luca

    2013-01-01

    The global trade of goods is associated with a virtual transfer of the water required for their production. The way changes in trade affect the virtual redistribution of freshwater resources has been recently documented through the analysis of the virtual water network. It is, however, unclear how these changes are contributed by different types of products and regions of the world. Here we show how the global patterns of virtual water transport are contributed by the trade of different commodity types, including plant, animal, luxury (e.g., coffee, tea, and alcohol), and other products. Major contributors to the virtual water network exhibit different trade patterns with regard to these commodity types. The net importers rely on the supply of virtual water from a small percentage of the global population. However, discrepancies exist among the different commodity networks. While the total virtual water flux through the network has increased between 1986 and 2010, the proportions associated with the four commodity groups have remained relatively stable. However, some of the major players have shown significant changes in the virtual water imports and exports associated with those commodity groups. For instance, China has switched from being a net exporter of virtual water associated with other products (non-edible plant and animal products typically used for manufacturing) to being the largest importer, accounting for 31% of the total water virtually transported with these products. Conversely, in the case of The United States of America, the commodity proportions have remained overall unchanged throughout the study period: the virtual water exports from The United States of America are dominated by plant products, whereas the imports are comprised mainly of animal and luxury products. PMID:23457481

  2. Recent history and geography of virtual water trade.

    Science.gov (United States)

    Carr, Joel A; D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca

    2013-01-01

    The global trade of goods is associated with a virtual transfer of the water required for their production. The way changes in trade affect the virtual redistribution of freshwater resources has been recently documented through the analysis of the virtual water network. It is, however, unclear how these changes are contributed by different types of products and regions of the world. Here we show how the global patterns of virtual water transport are contributed by the trade of different commodity types, including plant, animal, luxury (e.g., coffee, tea, and alcohol), and other products. Major contributors to the virtual water network exhibit different trade patterns with regard to these commodity types. The net importers rely on the supply of virtual water from a small percentage of the global population. However, discrepancies exist among the different commodity networks. While the total virtual water flux through the network has increased between 1986 and 2010, the proportions associated with the four commodity groups have remained relatively stable. However, some of the major players have shown significant changes in the virtual water imports and exports associated with those commodity groups. For instance, China has switched from being a net exporter of virtual water associated with other products (non-edible plant and animal products typically used for manufacturing) to being the largest importer, accounting for 31% of the total water virtually transported with these products. Conversely, in the case of The United States of America, the commodity proportions have remained overall unchanged throughout the study period: the virtual water exports from The United States of America are dominated by plant products, whereas the imports are comprised mainly of animal and luxury products.

  3. Networking and virtuality in entrepreneurial organisations in the age of countries without borders

    OpenAIRE

    Duobienė, Jurga; Duoba, Kęstutis; Kumpikaitė, Vilmantė; Žičkutė, Ineta

    2015-01-01

    Entrepreneurial organisations continuously search for innovations and innovative ways of doing business that provide a competitive advantage in the market. In the age of countries without borders and the free movement of people, organisations in Eastern Europe deal with a shortage of high-quality labour caused by migration, which forces them to seek alternative ways of managing work and the workplace. The paper analyses networking, the virtual workplace and other characteristics of job design in entrepreneuri...

  4. VEM: Virtual Enterprise Methodology

    DEFF Research Database (Denmark)

    Tølle, Martin; Vesterager, Johan

    2003-01-01

    This chapter presents a virtual enterprise methodology (VEM) that outlines activities to consider when setting up and managing virtual enterprises (VEs). As a methodology, the VEM helps companies to ask the right questions when preparing for and setting up an enterprise network, which works...

  5. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
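
    The record does not reproduce the scheme itself, but a minimal sketch of one explicit, spatially local and exactly unitary family of time steps for the 1D Schroedinger equation conveys the idea: the hopping term is split into even and odd nearest-neighbour pairs, each advanced by an exact 2x2 unitary, followed by a local potential phase. The grid, potential and wave packet below are illustrative assumptions:

        import numpy as np

        hbar, m, dx, dt, n = 1.0, 1.0, 0.1, 0.01, 512
        x = (np.arange(n) - n / 2) * dx
        V = 0.5 * (np.abs(x) < 1.0)                 # toy square barrier
        psi = np.exp(-(x + 10.0) ** 2 + 5j * x).astype(complex)
        psi /= np.linalg.norm(psi)

        t_hop = hbar / (2 * m * dx ** 2)            # hopping amplitude
        c, s = np.cos(t_hop * dt), 1j * np.sin(t_hop * dt)

        def pair_update(psi, start):
            """Exact 2x2 unitary on pairs (start, start+1), (start+2, ...)."""
            a, b = psi[start:-1:2].copy(), psi[start + 1::2].copy()
            psi[start:-1:2] = c * a + s * b
            psi[start + 1::2] = s * a + c * b

        for step in range(1000):
            pair_update(psi, 0)                     # even bonds
            pair_update(psi, 1)                     # odd bonds
            psi *= np.exp(-1j * V * dt / hbar)      # local potential phase
        print("norm after 1000 steps:", np.linalg.norm(psi))  # stays at 1

    Each substep is exactly unitary whatever the step size, so the composition is unconditionally stable while remaining explicit and local, matching the properties claimed in the abstract.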

  6. Students' Personal Networks in Virtual and Personal Learning Environments: A Case Study in Higher Education Using Learning Analytics Approach

    Science.gov (United States)

    Casquero, Oskar; Ovelar, Ramón; Romo, Jesús; Benito, Manuel; Alberdi, Mikel

    2016-01-01

    The main objective of this paper is to analyse the effect of the affordances of a virtual learning environment and a personal learning environment (PLE) in the configuration of the students' personal networks in a higher education context. The results are discussed in light of the adaptation of the students to the learning network made up by two…

  7. A New Energy-Efficient Data Transmission Scheme Based on DSC and Virtual MIMO for Wireless Sensor Network

    OpenAIRE

    Li, Na; Zhang, Liwen; Li, Bing

    2015-01-01

    Energy efficiency in wireless sensor networks (WSNs) is one of the primary performance parameters. To improve the energy efficiency of WSNs, we introduce distributed source coding (DSC) and virtual multiple-input multiple-output (MIMO) into the wireless sensor network and then propose a new data transmission scheme called DSC-MIMO. DSC-MIMO compresses the source data using distributed source coding before transmitting, which is different from existing communication schemes. Data compression c...

  8. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, at hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  9. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, at hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  10. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and to efficient directive and declaratory software for the modelling of complex systems

  11. Behavioral and network origins of wealth inequality: insights from a virtual world.

    Directory of Open Access Journals (Sweden)

    Benedikt Fuchs

    Full Text Available Almost universally, wealth is not distributed uniformly within societies or economies. Even though wealth data have been collected in various forms for centuries, the origins for the observed wealth-disparity and social inequality are not yet fully understood. Especially the impact and connections of human behavior on wealth could so far not be inferred from data. Here we study wealth data from the virtual economy of the massive multiplayer online game (MMOG) Pardus. This data not only contains every player's wealth at every point in time, but also all actions over a timespan of almost a decade. We find that wealth distributions in the virtual world are very similar to those in Western countries. In particular we find an approximate exponential distribution for low wealth levels and a power-law tail for high levels. The Gini index is found to be g = 0.65, which is close to the indices of many Western countries. We find that wealth-increase rates depend on the time when players entered the game. Players that entered the game early on tend to have remarkably higher wealth-increase rates than those who joined later. Studying the players' positions within their social networks, we find that the local position in the trade network is most relevant for wealth. Wealthy people have high in- and out-degrees in the trade network, relatively low nearest-neighbor degrees, and low clustering coefficients. Wealthy players have many mutual friendships and are socially well respected by others, but spend more time on business than on socializing. Wealthy players have few personal enemies, but show animosity towards players that behave as public enemies. We find that players that are not organized within social groups are significantly poorer on average. We observe that "political" status and wealth go hand in hand.

  12. Behavioral and network origins of wealth inequality: insights from a virtual world.

    Science.gov (United States)

    Fuchs, Benedikt; Thurner, Stefan

    2014-01-01

    Almost universally, wealth is not distributed uniformly within societies or economies. Even though wealth data have been collected in various forms for centuries, the origins for the observed wealth-disparity and social inequality are not yet fully understood. Especially the impact and connections of human behavior on wealth could so far not be inferred from data. Here we study wealth data from the virtual economy of the massive multiplayer online game (MMOG) Pardus. This data not only contains every player's wealth at every point in time, but also all actions over a timespan of almost a decade. We find that wealth distributions in the virtual world are very similar to those in Western countries. In particular we find an approximate exponential distribution for low wealth levels and a power-law tail for high levels. The Gini index is found to be g = 0.65, which is close to the indices of many Western countries. We find that wealth-increase rates depend on the time when players entered the game. Players that entered the game early on tend to have remarkably higher wealth-increase rates than those who joined later. Studying the players' positions within their social networks, we find that the local position in the trade network is most relevant for wealth. Wealthy people have high in- and out-degrees in the trade network, relatively low nearest-neighbor degrees, and low clustering coefficients. Wealthy players have many mutual friendships and are socially well respected by others, but spend more time on business than on socializing. Wealthy players have few personal enemies, but show animosity towards players that behave as public enemies. We find that players that are not organized within social groups are significantly poorer on average. We observe that "political" status and wealth go hand in hand.
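
    As a small worked illustration (synthetic data, not the Pardus dataset), the Gini index reported above can be computed from a sorted wealth vector:

        import numpy as np

        # Synthetic wealth sample; a pure exponential distribution gives
        # g = 0.5, while the paper reports g = 0.65 for the virtual economy.
        rng = np.random.default_rng(0)
        wealth = np.sort(rng.exponential(scale=100.0, size=10_000))

        n = wealth.size
        ranks = np.arange(1, n + 1)
        gini = 2.0 * np.sum(ranks * wealth) / (n * wealth.sum()) - (n + 1) / n
        print(f"Gini index g = {gini:.2f}")   # ~0.50 for this toy sample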

  13. Virtualized Network Control (VNC)

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, Thomas [Univ. of Southern California, Los Angeles, CA (United States); Guok, Chin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ghani, Nasir [Univ. of New Mexico, Albuquerque, NM (United States)

    2013-01-31

    The focus of this project was on the development of a "Network Service Plane" as an abstraction model for the control and provisioning of multi-layer networks. The primary motivation for this work was the requirements of next generation networked applications which will need to access advanced networking as a first class resource at the same level as compute and storage resources. A new class of "Intelligent Network Services" was defined in order to facilitate the integration of advanced network services into application specific workflows. This new class of network services is intended to enable real-time interaction between the application co-scheduling algorithms and the network for the purposes of workflow planning, real-time resource availability identification, scheduling, and provisioning actions.

  14. Knowledge Networking for Family Planning: The Potential for Virtual Communities of Practice to Move Forward the Global Reproductive Health Agenda

    Directory of Open Access Journals (Sweden)

    Megan O’Brien

    2010-06-01

    Full Text Available This paper highlights experience from five years of using virtual communication tools developed by the World Health Organization Department of Reproductive Health and Research (WHO/RHR) and its partners in the Implementing Best Practices (IBP) in Reproductive Health Initiative to help bridge the knowledge-to-practice gap among family planning and reproductive health professionals. It explores how communities of practice and virtual networks offer a unique low-cost way to convene public health practitioners around the world to share experiences. It offers examples of how communities of practice can contribute to the development and dissemination of evidence-based health information products, and explores the potential for online networking and collaboration to enhance and inform program design and management. The paper is intended to inform the reproductive health community, as well as others working in health and development, of the potential for using virtual communities of practice to work towards achieving common goals and provide some examples of their successful use.

  15. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
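
    A minimal sketch of the sorted k-mer list idea mentioned above (the sequence and k are invented; the actual BG/P data structures are distributed and far more elaborate). Sorting the k-mers lets seed matches between genomes be found by a linear merge rather than random-access lookups:

        def sorted_kmer_list(sequence, k=8):
            """Return (k-mer, position) pairs sorted lexicographically."""
            return sorted((sequence[i:i + k], i)
                          for i in range(len(sequence) - k + 1))

        genome = "ACGTACGTGGCTAACGT"
        for kmer, pos in sorted_kmer_list(genome, k=4)[:5]:
            print(kmer, pos)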

  16. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and

  17. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in the recent years combine conventional multi-core CPU with GPU accelerators and provide an opportunity for manifold increase and computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide the interested external researchers the regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
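
    The cross-correlation measurement at the core of both records can be sketched in a few lines; the traces below are synthetic stand-ins for pre-processed records from a station pair, and the sampling rate is an assumption:

        import numpy as np

        fs = 1.0                                   # samples per second
        rng = np.random.default_rng(1)
        noise = rng.standard_normal(3_600)         # one hour of common noise
        trace_a = noise + 0.1 * rng.standard_normal(noise.size)
        trace_b = np.roll(noise, 40) + 0.1 * rng.standard_normal(noise.size)

        xcorr = np.correlate(trace_a, trace_b, mode="full")
        lags = np.arange(-noise.size + 1, noise.size) / fs
        # Peak near -40 s: trace_b sees the common source 40 s later.
        print("correlation peak at lag %.0f s" % lags[np.argmax(xcorr)])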

  18. Assessing offshore emergency evacuation behavior in a virtual environment using a Bayesian Network approach

    International Nuclear Information System (INIS)

    Musharraf, Mashrura; Smith, Jennifer; Khan, Faisal; Veitch, Brian; MacKinnon, Scott

    2016-01-01

    In the performance influencing factor (PIF) hierarchy, person-based influencing factors reside in the top level along with machine-based, team-based, organization-based and situation/stressor-based factors. Though person-based PIFs like morale, motivation, and attitude (MMA) play an important role in shaping performance, it is nearly impossible to assess such PIFs directly. However, it is possible to measure behavioral indicators (e.g. compliance, use of information) that can provide insight regarding the state of the unobservable person-based PIFs. One common approach to measuring these indicators is to carry out a self-reported questionnaire survey. Significant work has been done to make such questionnaires reliable, but the potential validity problem associated with any questionnaire is that the data are subjective and thus may bear a limited relationship to reality. This paper describes the use of a virtual environment to measure behavioral indicators, which in turn can be used as proxies to assess otherwise unobservable PIFs like MMA. A Bayesian Network (BN) model is first developed to define the relationship between person-based PIFs and measurable behavioral indicators. The paper then shows how these indicators can be measured using evidence collected from a virtual environment of an offshore petroleum installation. A study that focused on emergency evacuation scenarios was done with 36 participants. The participants were first assessed using a multiple choice test. They were then assessed based on their observed performance during simulated offshore emergency evacuation conditions. A comparison of the two assessments demonstrates the potential benefits and challenges of using virtual environments to assess behavioral indicators, and thus the person-based PIFs. - Highlights: • New approach to use virtual environment as measure of behavioral indicators. • New model to study morale, motivation, and attitude. • Bayesian Network model to define the

  19. Configuration of the Virtual Laboratory for Fusion Researches in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, T.; Nagayama, Y.; Nakanishi, H.; Ishiguro, S.; Takami, S.; Tsuda, K.; Okamura, S. [National Institute for Fusion Science, National Institutes of Natural Sciences, Toki (Japan)

    2009-07-01

    SNET is a virtual laboratory system for nuclear fusion research in Japan. It has been developed since 2001 on SINET3, the national academic network backbone operated by the National Institute of Informatics. Twenty-one sites, including major Japanese universities, JAEA and NIFS, are mutually connected over SNET at a speed of 1 Gbps as of the 2008 fiscal year. SNET is a closed network system based on L2 and L3 VPNs and is connected to the web through the firewall at NIFS for security maintenance. The collaboration categories in SNET are as follows: LHD remote participation; remote use of the supercomputer system; and the all-Japan ST (Spherical Tokamak) research program. For example, collaborators in the first category can control their diagnostic devices at LHD from a remote station and analyze the LHD data as if they were in the LHD control room. The details of the network policy differ from category to category because each has its own particular purpose. In October 2008, Kyushu University and NIFS were connected by L2 VPN. The site was already connected by L3 VPN, but the data transfer rate was rather low; the L2 VPN supports the bulk data transfer produced by QUEST, the spherical tokamak device at Kyushu University. A wide-area broadcast test has begun distributing to remote stations the video presented on the front panel of the LHD control room. ITER activity started in 2007, and 'The ITER Remote Experimentation Centre' will be constructed in the village of Rokkasho in Japan under the ITER-BA agreement. SNET would be useful for distributing ITER data to Japanese universities and institutions. (authors)

  20. Energy Balance Routing Algorithm Based on Virtual MIMO Scheme for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jianpo Li

    2014-01-01

    Full Text Available Wireless sensor networks are usually energy-limited, and therefore an energy-efficient routing algorithm is desired for prolonging the network lifetime. In this paper, we propose a new energy balance routing algorithm which has the following three improvements over the conventional LEACH algorithm. Firstly, we propose a new cluster head selection scheme that takes into consideration the remaining energy and the most recent energy consumption of the nodes and the entire network. In this way, sensor nodes with smaller remaining energy or larger energy consumption will be much less likely to be chosen as cluster heads. Secondly, according to the ratio of remaining energy to distance, cooperative nodes are selected to form virtual MIMO structures. This mitigates the uneven distribution of clusters and the unbalanced energy consumption of the whole network. Thirdly, we construct a comprehensive energy consumption model, which reflects practical energy consumption more realistically. Numerical simulations analyze the influence of the number of cooperative nodes and the number of cluster head nodes on the network lifetime. It is shown that the energy consumption of the proposed routing algorithm is lower than that of the conventional LEACH algorithm and, for the simulated example, the network lifetime is prolonged by about 25%.
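
    A hedged sketch of this style of energy-aware cluster-head selection (the paper's exact weighting is not reproduced; the threshold form, field names and numbers below are assumptions layered on the classic LEACH rule):

        import random

        P = 0.05   # desired fraction of cluster heads per round

        def is_cluster_head(node, round_no):
            if node["rounds_since_ch"] < 1 / P:    # served as head recently
                return False
            base = P / (1 - P * (round_no % int(1 / P)))   # LEACH threshold
            energy_factor = node["e_remaining"] / node["e_initial"]
            usage_factor = 1.0 - node["e_recent"] / node["e_remaining"]
            return random.random() < base * energy_factor * max(usage_factor, 0.0)

        node = {"e_initial": 2.0, "e_remaining": 1.4,
                "e_recent": 0.05, "rounds_since_ch": 40}
        print(is_cluster_head(node, round_no=7))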

  1. Recent history and geography of virtual water trade.

    Directory of Open Access Journals (Sweden)

    Joel A Carr

    Full Text Available The global trade of goods is associated with a virtual transfer of the water required for their production. How changes in trade affect the virtual redistribution of freshwater resources has recently been documented through analysis of the virtual water network. It is, however, unclear how different types of products and regions of the world contribute to these changes. Here we show how the trade of different commodity types, including plant, animal, luxury (e.g., coffee, tea, and alcohol), and other products, shapes the global patterns of virtual water transport. Major contributors to the virtual water network exhibit different trade patterns with regard to these commodity types. The net importers rely on the supply of virtual water from a small percentage of the global population. However, discrepancies exist among the different commodity networks. While the total virtual water flux through the network increased between 1986 and 2010, the proportions associated with the four commodity groups have remained relatively stable. Nevertheless, some of the major players have shown significant changes in the virtual water imports and exports associated with those commodity groups. For instance, China has switched from being a net exporter of virtual water associated with other products (non-edible plant and animal products typically used for manufacturing) to being the largest importer, accounting for 31% of the total water virtually transported with these products. Conversely, in the case of the United States of America, the commodity proportions have remained largely unchanged throughout the study period: the virtual water exports from the United States of America are dominated by plant products, whereas the imports consist mainly of animal and luxury products.

  2. Solar-Terrestrial and Astronomical Research Network (STAR-Network) - A Meaningful Practice of New Cyberinfrastructure on Space Science

    Science.gov (United States)

    Hu, X.; Zou, Z.

    2017-12-01

    For the next decades, a comprehensive big data application environment will be the dominant direction of cyberinfrastructure development in space science. To make the concept of such a BIG cyberinfrastructure (e.g. Digital Space) a reality, several capabilities should be emphasized and integrated, including the science data system, the digital space engine, big data applications (tools and models) and the IT infrastructure. In the past few years, the CAS Chinese Space Science Data Center (CSSDC) has made a helpful attempt in this direction. A cloud-enabled virtual research platform for space science, called the Solar-Terrestrial and Astronomical Research Network (STAR-Network), has been developed to serve the full lifecycle of space science missions and research activities. It integrates a wide range of disciplinary and interdisciplinary resources to provide science-problem-oriented data retrieval and query services, collaborative mission demonstration services, mission operation support services, space weather computing and analysis services, and other self-service facilities. This platform is supported by persistent infrastructure, including cloud storage, cloud computing and supercomputing. Different varieties of resources are interconnected: science data can be displayed in the browser by visualization tools, data analysis tools and physical models can be driven by the applicable science data, and computing results can be saved on the cloud, for example. So far, STAR-Network has served a series of space science missions in China, including the Strategic Pioneer Program on Space Science (this program has invested in space science satellites such as DAMPE, HXMT and QUESS, and more satellites will be launched around 2020) and the Meridian Space Weather Monitor Project. Scientists have obtained new findings by using the science data from these missions with STAR-Network's contribution. We are confident that STAR-Network is an exciting practice of new cyberinfrastructure architecture on

  3. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  4. Managing Distributed Innovation Processes in Virtual Organizations by Applying the Collaborative Network Relationship Analysis

    Science.gov (United States)

    Eschenbächer, Jens; Seifert, Marcus; Thoben, Klaus-Dieter

    Distributed innovation processes are considered a new option for handling both the complexity and the speed with which new products and services need to be prepared. Indeed, most research on innovation processes has focused on multinational companies with an intra-organisational perspective. The phenomenon of innovation processes in networks, with an inter-organisational perspective, has been almost neglected. Collaborative networks present a perfect playground for such distributed innovation processes, and the authors specifically highlight Virtual Organisations because of their dynamic behaviour. Research activities supporting distributed innovation processes in VOs are rather new, so little knowledge about the management of such research is available. The presentation of the collaborative network relationship analysis addresses this gap. It will be shown that a qualitative planning of collaboration intensities can support real business cases by providing knowledge and planning data.

  5. Dynamic routing and spectrum assignment based on multilayer virtual topology and ant colony optimization in elastic software-defined optical networks

    Science.gov (United States)

    Wang, Fu; Liu, Bo; Zhang, Lijia; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun

    2017-07-01

    Elastic software-defined optical networks greatly improve the flexibility of optical switching networks while bringing challenges to routing and spectrum assignment (RSA). A multilayer virtual topology model is proposed to solve the RSA problem. Two RSA algorithms based on the virtual topology are proposed: an ant colony optimization (ACO) algorithm of minimum consecutiveness loss and an ACO algorithm of maximum spectrum consecutiveness. Owing to the computing power of the control layer in a software-defined network, the routing algorithm avoids frequent exchange of link-state information between routers. Based on the effect of spectrum consecutiveness loss on the pheromone in the ACO, the path and spectrum with the minimal impact on the network are selected for each service request. The proposed algorithms have been compared with other algorithms. The results show that the proposed algorithms can reduce the blocking rate by at least 5% and perform better in spectrum efficiency. Moreover, the proposed algorithms can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness.
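
    A minimal sketch of the ACO transition rule that such algorithms build on, with the heuristic term penalizing spectrum-consecutiveness loss (the exponents and numbers are illustrative, not the paper's parameters):

        import random

        alpha, beta = 1.0, 2.0    # pheromone vs. heuristic weighting

        def choose_next_hop(candidates):
            """candidates: list of (node, pheromone, consecutiveness_loss)."""
            weights = [(tau ** alpha) * ((1.0 / (1.0 + loss)) ** beta)
                       for _, tau, loss in candidates]
            r, acc = random.uniform(0, sum(weights)), 0.0
            for (node, _, _), w in zip(candidates, weights):
                acc += w
                if r <= acc:
                    return node
            return candidates[-1][0]    # guard against float rounding

        print(choose_next_hop([("n1", 0.8, 2.0), ("n2", 0.5, 0.5),
                               ("n3", 0.3, 0.1)]))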

  6. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures, due to issues such as the size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform on which to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes contention into account. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only 25 to 200 times slower than real time.

  7. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  8. Understanding interactions in virtual HIV communities: a social network analysis approach.

    Science.gov (United States)

    Shi, Jingyuan; Wang, Xiaohui; Peng, Tai-Quan; Chen, Liang

    2017-02-01

    This study investigated the driving mechanism of interaction-tie formation among people living with HIV/AIDS in one of the largest virtual HIV communities in China using social network analysis. Specifically, we explained the probability of forming interaction ties with homophily and popularity characteristics. The exponential random graph modeling results showed that members of this community tend to form homophilous ties in terms of shared location and interests. Moreover, we found a tendency away from a popularity effect. This suggests that in this community, resources and information were not disproportionately received by a few members, which could be beneficial to the overall community.

  9. Traffic Command Gesture Recognition for Virtual Urban Scenes Based on a Spatiotemporal Convolution Neural Network

    Directory of Open Access Journals (Sweden)

    Chunyong Ma

    2018-01-01

    Full Text Available Intelligent recognition of traffic police command gestures increases authenticity and interactivity in virtual urban scenes. To actualize real-time traffic gesture recognition, a novel spatiotemporal convolution neural network (ST-CNN) model is presented. We utilized Kinect 2.0 to construct a traffic police command gesture skeleton (TPCGS) dataset collected from 10 volunteers. Subsequently, convolution operations on the locational change of each skeletal point were performed to extract temporal features, analyze the relative positions of skeletal points, and extract spatial features. After temporal and spatial features based on the three-dimensional positional information of traffic police skeleton points were extracted, the ST-CNN model classified positional information into eight types of Chinese traffic police gestures. The test accuracy of the ST-CNN model was 96.67%. In addition, a virtual urban traffic scene in which real-time command tests were carried out was set up, and a real-time test accuracy rate of 93.0% was achieved. The proposed ST-CNN model ensured a high level of accuracy and robustness. The ST-CNN model recognized traffic command gestures, and this recognition was used to control vehicles in virtual traffic environments, which enriches the interactive mode of the virtual city scene. Traffic command gesture recognition contributes to smart city construction.

  10. Performance evaluation of multi-stratum resources optimization with network functions virtualization for cloud-based radio over optical fiber networks.

    Science.gov (United States)

    Yang, Hui; He, Yongqi; Zhang, Jie; Ji, Yuefeng; Bai, Wei; Lee, Young

    2016-04-18

    Cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing using cloud BBUs. In our previous work, we implemented cross-stratum optimization of optical network and application stratum resources, which allows services to be accommodated in optical networks. This study extends that work to consider the joint optimization of radio, optical and BBU processing resources in the 5G age. We propose a novel multi-stratum resources optimization (MSRO) architecture with network functions virtualization for cloud-based radio over optical fiber networks (C-RoFN) using software-defined control. A global evaluation scheme (GES) for MSRO in C-RoFN is introduced based on the proposed architecture. The MSRO can enhance responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical and BBU resources effectively to maximize radio coverage. The efficiency and feasibility of the proposed architecture are experimentally demonstrated on an OpenFlow-based enhanced SDN testbed. The performance of GES under a heavy-traffic-load scenario is also quantitatively evaluated with the MSRO architecture in terms of resource occupation rate and path provisioning latency, and compared with another provisioning scheme.

  11. Links between real and virtual networks: a comparative study of online communities in Japan and Korea.

    Science.gov (United States)

    Ishii, Kenichi; Ogasahara, Morihiro

    2007-04-01

    The present study explores how online communities affect real-world personal relations, based on a cross-cultural survey conducted in Japan and Korea. Findings indicate that the gratifications of online communities moderate the effects of online communities on social participation. Online communities are categorized into real-group-based communities and virtual-network-based communities. Membership in a real-group-based online community is positively correlated with social bonding gratification and negatively correlated with information-seeking gratification. Japanese users show a stronger preference for virtual-network-based online communities, while their Korean counterparts prefer real-group-based online communities. Korean users are more active in online communities and seek a higher level of socializing gratifications, such as social bonding and making new friends, than their Japanese counterparts. These results indicate that in Korea, personal relations via the online community are closely associated with real-world personal relations, but this is not the case in Japan. This study suggests that the effects of the Internet are culture-specific and that the online community can serve different functions in different cultural environments.

  12. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    Full Text Available A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software was designed for the automatic investigation of problems of computational mathematics with approximate data of different structures. Applied software was implemented to support mathematical modeling problems in construction, welding and filtration processes.

  13. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  14. Modeling the future evolution of the virtual water trade network: A combination of network and gravity models

    Science.gov (United States)

    Sartori, Martina; Schiavo, Stefano; Fracasso, Andrea; Riccaboni, Massimo

    2017-12-01

    The paper investigates how the topological features of the virtual water (VW) network and the size of the associated VW flows are likely to change over time, under different socio-economic and climate scenarios. We combine two alternative models of network formation (a stochastic model and a fitness model, used to describe the structure of VW flows) with a gravity model of trade to predict the intensity of each bilateral flow. This combined approach is superior to existing methodologies in its ability to replicate the observed features of VW trade. The insights from the models are used to forecast future VW flows in 2020 and 2050, under different climatic scenarios, and compare them with future water availability. Results suggest that the current trend of VW exports is not sustainable for all countries. Moreover, our approach highlights that some VW importers might be exposed to "imported water stress" as they rely heavily on imports from countries whose water use is unsustainable.
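
    The record does not print the gravity specification; a standard log-linear form, which the calibration described plausibly resembles, is

        \ln F_{ij} = \beta_0 + \beta_1 \ln M_i + \beta_2 \ln M_j
                     - \beta_3 \ln D_{ij} + \varepsilon_{ij}

    where F_{ij} is the VW flow from country i to country j, M_i and M_j are economic masses (e.g., GDP or population), D_{ij} is the distance between the two countries, and \varepsilon_{ij} is an error term.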

  15. Class network routing

    Science.gov (United States)

    Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-09-08

    Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.
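
    A toy sketch of the class-routing idea on a 2D mesh (the geometry is illustrative; the patent targets a torus with hardware multicast support): one class-addressed message covers a whole row or column, replacing one unicast per destination.

        ROWS, COLS = 8, 8

        def class_destinations(src, axis):
            """Nodes reached by a 'row' or 'column' class broadcast."""
            r, c = src
            if axis == "row":
                return [(r, j) for j in range(COLS) if (r, j) != src]
            return [(i, c) for i in range(ROWS) if (i, c) != src]

        # One class-routed message replaces COLS - 1 separate unicasts:
        print(class_destinations((2, 3), "row"))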

  16. Use of the Remote Access Virtual Environment Network (RAVEN) for coordinated IVA-EVA astronaut training and evaluation.

    Science.gov (United States)

    Cater, J P; Huffman, S D

    1995-01-01

    This paper presents a unique virtual reality training and assessment tool developed under a NASA grant, "Research in Human Factors Aspects of Enhanced Virtual Environments for Extravehicular Activity (EVA) Training and Simulation." The Remote Access Virtual Environment Network (RAVEN) was created to train and evaluate the verbal, mental and physical coordination required between the intravehicular (IVA) astronaut operating the Remote Manipulator System (RMS) arm and the EVA astronaut standing in foot restraints on the end of the RMS. The RAVEN system currently allows the EVA astronaut to approach the Hubble Space Telescope (HST) under control of the IVA astronaut and grasp, remove, and replace the Wide Field Planetary Camera drawer from its location in the HST. Two viewpoints, one stereoscopic and one monoscopic, were created and linked by Ethernet, providing the two trainees with the appropriate training environments.

  17. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization: running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone evaluates the visualization and rendering performance of current and next-generation supercomputers in contrast to GPU-based visualization clusters, and evaluates the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone explores, evaluates and advances the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  18. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB parallel NAND Flash disk array, the Fusion-io. The Fusion system specs are as follows

  19. Making Wireless Networks Secure for NASA Mission Critical Applications Using Virtual Private Network (VPN) Technology

    Science.gov (United States)

    Nichols, Kelvin F.; Best, Susan; Schneider, Larry

    2004-01-01

    With so many security issues involved with wireless networks, the technology has not been fully utilized in the area of mission-critical applications. These applications include telemetry, commanding, voice and video. Wireless networking would allow payload operators the mobility to take computers outside of the control room to their offices and anywhere else in the facility that the wireless network reaches. But the risk is too great: someone could sit just inside your wireless network coverage, intercept enough of your network traffic to steal proprietary data from a payload experiment or, worse yet, hack back into your system and do even greater harm by issuing harmful commands. Wired Equivalent Privacy (WEP) is improving but has a way to go before it can be trusted to protect mission-critical data. Today's hackers are becoming more aggressive and innovative, and in order to take advantage of the benefits that wireless networking offers, appropriate security measures need to be in place that will thwart them. The Virtual Private Network (VPN) offers a solution to the security problems that have kept wireless networks from being used for mission-critical applications. A VPN provides a level of encryption that ensures data is protected while being transmitted over a wireless local area network (LAN). The VPN allows a user to authenticate to the site that the user needs to access. Once this authentication has taken place, the network traffic between that site and the user is encapsulated in VPN packets with the Triple Data Encryption Standard (3DES). 3DES is an encryption standard that uses a single secret key to encrypt and decrypt data. The length of the encryption key is 168 bits, as opposed to its predecessor DES, which has a 56-bit encryption key. Even though 3DES is the common encryption standard today, the Advanced Encryption Standard (AES), which provides even better encryption at a lower cycle cost, is growing
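
    As a hedged illustration of the 3DES encryption discussed above (using the pycryptodome package; the key handling is purely illustrative, since a real VPN negotiates keys through its tunnel-establishment protocol):

        from Crypto.Cipher import DES3
        from Crypto.Random import get_random_bytes
        from Crypto.Util.Padding import pad, unpad

        key = DES3.adjust_key_parity(get_random_bytes(24))  # 168-bit key
        cipher = DES3.new(key, DES3.MODE_CBC)               # random IV

        telemetry = b"payload experiment frame 0042"        # invented data
        ciphertext = cipher.encrypt(pad(telemetry, DES3.block_size))

        decipher = DES3.new(key, DES3.MODE_CBC, iv=cipher.iv)
        assert unpad(decipher.decrypt(ciphertext), DES3.block_size) == telemetry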

  20. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever-tightening reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High-performance requirements need to be carefully considered if they are to be met in an environment where the running software has to work through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
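
    A rough sketch of the style of microbenchmark the record describes: time identical memory and floating-point kernels on bare metal and inside the VM, then compare the numbers (array sizes and repeat counts are illustrative assumptions):

        import time
        import numpy as np

        def best_time(fn, repeats=5):
            """Best-of-N wall-clock time for a callable."""
            times = []
            for _ in range(repeats):
                t0 = time.perf_counter()
                fn()
                times.append(time.perf_counter() - t0)
            return min(times)

        buf = np.zeros(64 * 1024 * 1024 // 8)    # 64 MiB of float64
        vec = np.random.default_rng(0).random(1_000_000)

        mem_s = best_time(lambda: buf.copy())                # bandwidth proxy
        fp_s = best_time(lambda: np.sqrt(vec * vec + 1.0))   # FP proxy
        print(f"64 MiB copy: {64 / mem_s:.0f} MiB/s; "
              f"FP kernel: {fp_s * 1e3:.2f} ms")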

  1. Material Matters for Learning in Virtual Networks: A Case Study of a Professional Learning Programme Hosted in a Google+ Online Community

    Science.gov (United States)

    Ackland, Aileen; Swinney, Ann

    2015-01-01

    In this paper, we draw on Actor-Network Theories (ANT) to explore how material components functioned to create gateways and barriers to a virtual learning network in the context of a professional development module in higher education. Students were practitioners engaged in family learning in different professional roles and contexts. The data…

  2. Heat recovery networks synthesis of large-scale industrial sites: Heat load distribution problem with virtual process subsystems

    International Nuclear Information System (INIS)

    Pouransari, Nasibeh; Maréchal, Francois

    2015-01-01

    Highlights: • Synthesizing industrial-size heat recovery networks with a match reduction approach. • Targeting TSI with minimum exchange between process subsystems. • Generating a feasible close-to-optimum network. • Tremendously reducing the HLD computational time and complexity. • Generating a realistic network with respect to the plant layout. - Abstract: This paper presents a targeting strategy to design a heat recovery network for an industrial plant by dividing the system into subsystems while considering the heat transfer opportunities between them. The methodology is based on a sequential approach. The heat recovery opportunities between process units and the optimal flow rates of utilities are first identified using a Mixed Integer Linear Programming (MILP) model. The site is then divided into a number of subsystems, where the overall interaction is summarized by a pair of virtual hot and cold streams per subsystem, reconstructed by solving the heat cascade inside each subsystem. The Heat Load Distribution (HLD) problem is then solved between those packed subsystems in a sequential procedure, where each time one of the subsystems is unpacked by switching from the virtual stream pair back to the original streams. The main advantages are to minimize the number of connections between process subsystems, to alleviate the computational complexity of the HLD problem, and to generate a feasible network compatible with the minimum energy consumption objective. The application of the proposed methodology is illustrated through a number of case studies, discussed and compared with relevant results from the literature

  3. The photoelectric effect and study of the diffraction of light: Two new experiments in UNILabs virtual and remote laboratories network

    International Nuclear Information System (INIS)

    Sánchez, Juan Pedro; Carreras, Carmen; Yuste, Manuel; Dormido, Sebastián; Sáenz, Jacobo; De la Torre, Luis; Rubén, Heradio

    2015-01-01

    This work describes two experiments: 'study of the diffraction of light: Fraunhofer approximation' and 'the photoelectric effect'. Each of them includes a virtual, simulated version of the experiment as well as a real one that can be operated remotely. The two virtual and remote labs (built using Easy Java(script) Simulations) are integrated in UNILabs, a network of online interactive laboratories based on the free Learning Management System Moodle. In this web environment, students can find not only the virtual and remote labs but also manuals covering the related theory, a description of each application's user interface, and so on.

  4. Dynamic Construction Scheme for Virtualization Security Service in Software-Defined Networks.

    Science.gov (United States)

    Lin, Zhaowen; Tao, Dan; Wang, Zhenji

    2017-04-21

    For a Software Defined Network (SDN), security is an important factor affecting its large-scale deployment. The existing security solutions for SDN mainly focus on the controller itself, which has to handle all the security protection tasks by using the programmability of the network. This will undoubtedly involve a heavy burden for the controller. More devastatingly, once the controller itself is attacked, the entire network will be paralyzed. Motivated by this, this paper proposes a novel security protection architecture for SDN. We design a security service orchestration center in the control plane of SDN; this center is physically decoupled from the SDN controller and constructs SDN security services. We adopt virtualization technology to construct a security meta-function library, and propose a dynamic security service composition construction algorithm based on web service composition technology. The rule-combining method is used to combine security meta-functions into security services which meet the requirements of users. Moreover, the RETE algorithm is introduced to improve the efficiency of the rule-combining method. We evaluate our solutions in a realistic scenario based on OpenStack. Substantial experimental results demonstrate that our solution achieves effective security protection while imposing only a small burden on the SDN controller.
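    To make the rule-combining idea concrete, here is a small illustrative sketch in Python (all names are hypothetical; this is not the paper's implementation): security meta-functions are kept in a library, and a rule table selects and chains them into a composite service matching a user's requirement.

        # Hypothetical security meta-function library: each function either
        # passes the packet through (possibly transformed) or drops it (None).
        META_FUNCTIONS = {
            "firewall": lambda pkt: pkt if pkt.get("port") in (80, 443) else None,
            "ids":      lambda pkt: None if "attack" in pkt.get("payload", "") else pkt,
            "encrypt":  lambda pkt: {**pkt, "payload": "<ciphertext>"},
        }

        RULES = [  # (user requirement, ordered chain of meta-functions)
            ({"confidential": True},  ["firewall", "ids", "encrypt"]),
            ({"confidential": False}, ["firewall", "ids"]),
        ]

        def compose_service(requirement):
            """Pick the first rule matching the requirement and build the chain."""
            for condition, chain in RULES:
                if all(requirement.get(k) == v for k, v in condition.items()):
                    funcs = [META_FUNCTIONS[name] for name in chain]
                    def service(pkt):
                        for f in funcs:
                            if pkt is None:
                                break          # packet was dropped upstream
                            pkt = f(pkt)
                        return pkt
                    return service
            raise ValueError("no rule matches the requirement")

        svc = compose_service({"confidential": True})
        print(svc({"port": 443, "payload": "hello"}))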

  5. Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters

    Science.gov (United States)

    Zubair, Suleiman; Fisal, Norsheila

    2014-01-01

    The need for implementing reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by their low overhead and efficiency in reliable data transfer in traditional wireless sensor networks, this potential has yet to be utilized for viable routing options in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by the creation of virtual clusters based on spectrum correlation, from which the next-hop choice is made based on link quality. The design maximizes the use of idle listening and receiver contention prioritization for energy efficiency, the avoidance of routing hot spots, and stability. The validation results, which closely follow the simulation results, show that the developed scheme makes greater advancement toward the sink than the usual route-selection decisions of relevant ad hoc on-demand distance vector operations, while ensuring channel quality. Further simulation results show the enhanced reliability, lower latency and energy efficiency of the presented scheme. PMID:24854362

  6. Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters

    Directory of Open Access Journals (Sweden)

    Suleiman Zubair

    2014-05-01

    Full Text Available The need for implementing reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by their low overhead and efficiency in reliable data transfer in traditional wireless sensor networks, this potential has yet to be utilized for viable routing options in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by the creation of virtual clusters based on spectrum correlation, from which the next-hop choice is made based on link quality. The design maximizes the use of idle listening and receiver contention prioritization for energy efficiency, the avoidance of routing hot spots, and stability. The validation results, which closely follow the simulation results, show that the developed scheme makes greater advancement toward the sink than the usual route-selection decisions of relevant ad hoc on-demand distance vector operations, while ensuring channel quality. Further simulation results show the enhanced reliability, lower latency and energy efficiency of the presented scheme.

  7. Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases

    Science.gov (United States)

    Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo

    2014-01-01

    Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii and its peripheral nunchuks-balance board, head-mounted displays and joysticks allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes in virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients’ brains, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies. PMID:25206907

  8. Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases.

    Science.gov (United States)

    Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo

    2014-04-15

    Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii and its peripheral nunchuks-balance board, head-mounted displays and joysticks allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes in virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients' brains, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies.

  9. All-optical virtual private network and ONUs communication in optical OFDM-based PON system.

    Science.gov (United States)

    Zhang, Chongfu; Huang, Jian; Chen, Chen; Qiu, Kun

    2011-11-21

    We propose and demonstrate a novel scheme that, for the first time to our knowledge, enables all-optical virtual private network (VPN) and all-optical inter-communication among optical network units (ONUs) in an optical orthogonal frequency-division multiplexing-based passive optical network (OFDM-PON) system using subcarrier band allocation. We consider intra-VPN and inter-VPN communications, which correspond to two different cases: VPN communication among ONUs in one group and in different groups. The proposed scheme can provide enhanced security and a more flexible configuration for VPN users compared to the VPN in WDM-PON or TDM-PON systems. All-optical VPN and inter-ONU communications at 10-Gbit/s with 16 quadrature amplitude modulation (16 QAM) for the proposed optical OFDM-PON system are demonstrated. These results verify that the proposed scheme is feasible. © 2011 Optical Society of America

  10. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods in reactor kinetics, reactor design, supercomputer architecture, probabilistic risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  11. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods in reactor kinetics, reactor design, supercomputer architecture, probabilistic risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  12. Networking of Bibliographical Information: Lessons learned for the Virtual Observatory

    Science.gov (United States)

    Genova, Françoise; Egret, Daniel

    Networking of bibliographic information is particularly remarkable in astronomy. On-line journals, the ADS bibliographic database, SIMBAD and NED are everyday tools for research, and provide easy navigation from one resource to another. Tables are published on line, in close collaboration with data centers. Recent developments include the links between observatory archives and the ADS, as well as the large-scale prototyping of object links between Astronomy and Astrophysics and SIMBAD, following those implemented a few years ago with New Astronomy and the Information Bulletin on Variable Stars. This networking has been made possible by close collaboration between the ADS, data centers such as the CDS and NED, and the journals, and this partnership is now being extended to observatory archives. Simple, de facto exchange standards, like the bibcode to refer to a published paper, have been the key for building links and exchanging data. This partnership, in which practitioners from different disciplines agree to link their resources and to work together to define useful and usable standards, has produced a revolution in scientists' practice. It is an excellent model for the Virtual Observatory projects.

  13. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is now a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain may vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases ...
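    The point-to-point communication pattern described here can be sketched as a 1-D halo exchange for a Jacobi-style finite-difference update; the fragment below uses mpi4py purely as an illustration of the local-only messaging, not as the authors' GPU solver.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_local = 1000                       # interior points owned by this rank
        u = np.random.rand(n_local + 2)      # plus one ghost cell on each side
        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        for _ in range(100):
            # Ghost-cell exchange with the two neighbours only (point-to-point);
            # no collective communication, so the pattern scales by construction.
            comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
            comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
            # Jacobi-style relaxation of the interior points.
            u[1:-1] = 0.5 * (u[:-2] + u[2:])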

  14. Material matters for learning in virtual networks: a case study of a professional learning programme hosted in a Google+ online community

    Directory of Open Access Journals (Sweden)

    Aileen Ackland

    2015-08-01

    Full Text Available In this paper, we draw on Actor-Network Theories (ANT) to explore how material components functioned to create gateways and barriers to a virtual learning network in the context of a professional development module in higher education. Students were practitioners engaged in family learning in different professional roles and contexts. The data comprised postings in the Google+ community, email correspondence, meeting notes, feedback submitted at the final workshop and post-module evaluation forms. Our analysis revealed a complex set of interactions, and suggests multiple ways human actors story their encounters with non-human components and the effects these have on the learning experience. The aim of this paper is to contribute to a more holistic understanding of the components and dynamics of social learning networks in the virtual world and consider the implications for the design of online learning for continuous professional development (CPD).

  15. Virtual Factory Testbed

    Data.gov (United States)

    Federal Laboratory Consortium — The Virtual Factory Testbed (VFT) is comprised of three physical facilities linked by a standalone network (VFNet). The three facilities are the Smart and Wireless...

  16. Leading a Virtual Intercultural Team. Implications for Virtual Team Leaders

    OpenAIRE

    Chutnik, Monika; Grzesik, Katarzyna

    2009-01-01

    An increasing number of companies operate with teams whose members are geographically scattered and have different cultural origins. They work through access to the same digital network and communicate by means of modern technology. Sometimes they are located in different time zones and have never met each other face to face. This is the age of the virtual team leader. Virtual leadership in intercultural groups requires special skills from leaders. Many of these reflect leadership s...

  17. A New Approach in Advance Network Reservation and Provisioning for High-Performance Scientific Data Transfers

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-01-28

    Scientific applications already generate many terabytes and even petabytes of data from supercomputer runs and large-scale experiments. The need for transferring data chunks of ever-increasing sizes through the network shows no sign of abating. Hence, we need high-bandwidth, high-speed networks such as ESnet (Energy Sciences Network). Network reservation systems such as ESnet's OSCARS (On-demand Secure Circuits and Advance Reservation System) establish guaranteed-bandwidth secure virtual circuits at a certain time, for a certain bandwidth and length of time. OSCARS checks network availability and capacity for the specified period of time, and allocates the requested bandwidth for that user if it is available. If the requested reservation cannot be granted, no alternative suggestion is returned to the user, so there is no way, from the user's viewpoint, to make an optimal choice. We report a new algorithm, where the user specifies the total volume that needs to be transferred, a maximum bandwidth that he/she can use, and a desired time period within which the transfer should be done. The algorithm can find alternate allocation possibilities, including earliest time for completion, or shortest transfer duration - leaving the choice to the user. We present a novel approach for path finding in time-dependent networks, and a new polynomial algorithm to find possible reservation options according to given constraints. We have implemented our algorithm for testing and incorporation into a future version of ESnet's OSCARS. Our approach provides a basis for provisioning end-to-end high performance data transfers over storage and network resources.
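    A toy version of such a search, under a deliberately simplified data model (a single path whose spare bandwidth is known per time slot), might look as follows; the function name and slot model are assumptions for illustration, not the OSCARS interface.

        def earliest_completion(avail, volume, max_bw, slot_len=3600):
            """avail: spare bandwidth (Gb/s) per hourly slot on the path;
            volume in Gb. Returns (first slot, last slot) or None."""
            remaining, start = volume, None
            for t, spare in enumerate(avail):
                bw = min(spare, max_bw)
                if bw <= 0:
                    continue                  # slot is fully booked
                if start is None:
                    start = t
                remaining -= bw * slot_len
                if remaining <= 0:
                    return start, t           # reservation spans slots start..t
            return None                       # not satisfiable inside the window

        # e.g. eight hourly slots, a 40 Tb transfer, at most 2 Gb/s:
        print(earliest_completion([1.0, 2.5, 3.0, 0.0, 2.0, 2.0, 2.0, 2.0],
                                  volume=40_000, max_bw=2.0))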

  18. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or evolves over a long time scale, we need a cluster computer or supercomputer. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, once used only to calculate display data, now has calculation capability superior to a PC's CPU; its performance matches that of a supercomputer of the year 2000. Although it has such great calculation potential, it is not easy to program a simulation code for the GPU, because doing so traditionally required converting a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) for programming its graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.
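    The kind of embarrassingly parallel Monte Carlo kernel that maps well onto a GPU can be illustrated with a classic pi estimate; it is written here with NumPy for brevity, with the understanding that on a GPU each sample would be evaluated by a CUDA thread.

        import numpy as np

        def mc_pi(n_samples=10_000_000, seed=0):
            """Estimate pi by sampling points in the unit square; every
            sample is independent, which is why this maps well to a GPU."""
            rng = np.random.default_rng(seed)
            x = rng.random(n_samples)
            y = rng.random(n_samples)
            inside = x * x + y * y <= 1.0
            return 4.0 * inside.mean()

        print(mc_pi())   # ~3.1416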

  19. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or evolves over a long time scale, we need a cluster computer or supercomputer. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, once used only to calculate display data, now has calculation capability superior to a PC's CPU; its performance matches that of a supercomputer of the year 2000. Although it has such great calculation potential, it is not easy to program a simulation code for the GPU, because doing so traditionally required converting a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) for programming its graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.

  20. A Virtual Private Local PCN Ring Network Based on ATM VP Cross—Connection

    Institute of Scientific and Technical Information of China (English)

    Lin, Bin; Ma, Yingjun; et al.

    1995-01-01

    A virtual private local PCN ring network (VPLPR) is proposed. VPLPR is a virtual logical ring serving digital cordless telephone systems, and it works on an ATM VP cross-connection mechanism. Fully distributed databases are organized for visitor location registers (VLR) and home location registers (HLR). The signaling protocols are upward-compatible with B-ISDN. The architecture and some of the main characteristics of VPLPR are given. How to configure the ATM VP cross-connection ring is described, and then a protocol conversion between STM frames and ATM cells in the base station controller (BSC) is presented.

  1. Virtual Community, social network and media environment of Canary Isands regional digital newspapers

    Directory of Open Access Journals (Sweden)

    Dr. Francisco Manuel Mateos Rodríguez

    2008-01-01

    Full Text Available The impact of the new communication and information technologies has favoured the creation of multiple local newspaper websites in the Canary Islands, making the regional press an alternative on the rise. This tendency significantly affects both the traditional and the new editions of the different regional and local newspapers of the Canaries, and motivates a different distribution, positioning and development within the local media environment, in which these media share a novel dimension of communication with a specific virtual community and social network on the World Wide Web.

  2. Virtual MIMO Beamforming and Device Pairing Enabled by Device-to-Device Communications for Multidevice Networks

    Directory of Open Access Journals (Sweden)

    Yeonjin Jeong

    2017-01-01

    Full Text Available We consider a multidevice network with asymmetric antenna configurations which supports not only communications between an access point and devices but also device-to-device (D2D) communications for the Internet of things. For the network, we propose transmit and receive beamforming with the channel state information (CSI) for virtual multiple-input multiple-output (MIMO) enabled by D2D receive cooperation. We analyze the sum rate achieved by a device pair in the proposed method and identify the strategies to improve the sum rate of the device pair. We next present a distributed algorithm and its equivalent algorithm for device pairing to maximize the throughput of the multidevice network. Simulation results confirm the advantages of the transmit CSI and D2D cooperation as well as the validity of the distributed algorithm.

  3. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.

  4. Grids, virtualization, and clouds at Fermilab

    International Nuclear Information System (INIS)

    Timm, S; Chadwick, K; Garzoglio, G; Noh, S

    2014-01-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  5. Grids, virtualization, and clouds at Fermilab

    Science.gov (United States)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  6. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community...

  7. Virtual and Augmented Reality on the 5G Highway

    OpenAIRE

    Orlosky, Jason; Kiyokawa, Kiyoshi; Takemura, Haruo

    2017-01-01

    In recent years, virtual and augmented reality have begun to take advantage of the high speed capabilities of data streaming technologies and wireless networks. However, limitations like bandwidth and latency still prevent us from achieving high fidelity telepresence and collaborative virtual and augmented reality applications. Fortunately, both researchers and engineers are aware of these problems and have set out to design 5G networks to help us to move to the next generation of virtual int...

  8. IMPLEMENTASI VIRTUAL PRIVATE NETWORK - WAN DALAM DUNIA BISNIS

    Directory of Open Access Journals (Sweden)

    Erma Suryani

    2007-01-01

    Full Text Available In the business world, an organization typically wants to build a Wide Area Network (WAN) to connect several of its branch offices. Before the emergence of Virtual Private Networks (VPN), organizations generally used expensive leased lines, so only large companies could afford them. VPN-WAN offers an alternative solution, because it can reduce the cost of building network infrastructure and cut operational costs by using the Internet as the communication medium. A company only needs to contact the nearest Internet Service Provider (ISP) to obtain this service. Every information packet sent can be accessed, monitored or even manipulated by other users. For the communication to be secure, additional protocols specifically designed to protect the transmitted data are required. Today many companies, including manufacturing, distribution and retail firms, oil and gas producers, telecommunications, finance, government and the transportation industry, use VPNs because of the facilities offered: remote access clients, LAN-to-LAN internetworking, and controlled access at low cost. Tests conducted by Miercom (a lab providing hardware performance testing) on the Cisco 1841 showed that it can sustain two-way communication over an E1-capacity IP-WAN interconnection with 3DES encryption, supporting throughput of up to 2 Mbps on an E1 IP-WAN connection. Using a VPN increases a company's effectiveness, work efficiency and scalability. Another benefit of a VPN is connection costs that are far lower than those of leased lines. Keywords: VPN, WAN, information packets, ISP, remote access client, scalability.

  9. VRML and Collaborative Environments: New Tools for Networked Visualization

    Science.gov (United States)

    Crutcher, R. M.; Plante, R. L.; Rajlich, P.

    We present two new applications that engage the network as a tool for astronomical research and/or education. The first is a VRML server which allows users over the Web to interactively create three-dimensional visualizations of FITS images contained in the NCSA Astronomy Digital Image Library (ADIL). The server's Web interface allows users to select images from the ADIL, fill in processing parameters, and create renderings featuring isosurfaces, slices, contours, and annotations; the often extensive computations are carried out on an NCSA SGI supercomputer server without the user having an individual account on the system. The user can then download the 3D visualizations as VRML files, which may be rotated and manipulated locally on virtually any class of computer. The second application is the ADILBrowser, a part of the NCSA Horizon Image Data Browser Java package. ADILBrowser allows a group of participants to browse images from the ADIL within a collaborative session. The collaborative environment is provided by the NCSA Habanero package which includes text and audio chat tools and a white board. The ADILBrowser is just an example of a collaborative tool that can be built with the Horizon and Habanero packages. The classes provided by these packages can be assembled to create custom collaborative applications that visualize data either from local disk or from anywhere on the network.

  10. Agreements in Virtual Organizations

    Science.gov (United States)

    Pankowska, Malgorzata

    This chapter is an attempt to explain the important impact that contract theory delivers with respect to the concept of virtual organization. The author believes that not enough research has been conducted in order to transfer theoretical foundations for networking to the phenomena of virtual organizations and open autonomic computing environment to ensure the controllability and management of them. The main research problem of this chapter is to explain the significance of agreements for virtual organizations governance. The first part of this chapter comprises explanations of differences among virtual machines and virtual organizations for further descriptions of the significance of the first ones to the development of the second. Next, the virtual organization development tendencies are presented and problems of IT governance in highly distributed organizational environment are discussed. The last part of this chapter covers analysis of contracts and agreements management for governance in open computing environments.

  11. Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah; Ross, Robert; Carns, Philip

    2016-05-15

    As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity Slim Fly flit-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model against the Kathareios et al. Slim Fly model results provided at moderately sized network scales. We further scale the model up to an unprecedented 1 million compute nodes; and through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into the network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster, achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.
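    The underlying simulation style can be illustrated with a minimal sequential discrete-event core (a heap-ordered event list); ROSS parallelizes this pattern optimistically across MPI ranks. The sketch below is generic and is not the Slim Fly model itself.

        import heapq
        import random

        events = []    # entries: (timestamp, sequence number, handler, payload)
        seq = 0

        def schedule(t, handler, payload):
            """Insert a future event; the sequence number breaks timestamp ties."""
            global seq
            heapq.heappush(events, (t, seq, handler, payload))
            seq += 1

        def packet_arrival(t, node):
            # Forward the packet to the next node after a random link delay,
            # until the simulated clock passes 100 time units.
            if t < 100.0:
                schedule(t + random.expovariate(1.0), packet_arrival, (node + 1) % 16)

        schedule(0.0, packet_arrival, 0)
        n_events = 0
        while events:
            t, _, handler, payload = heapq.heappop(events)
            handler(t, payload)
            n_events += 1
        print(f"processed {n_events} events, final time ~{t:.1f}")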

  12. Raising Virtual Laboratories in Australia onto global platforms

    Science.gov (United States)

    Wyborn, L. A.; Barker, M.; Fraser, R.; Evans, B. J. K.; Moloney, G.; Proctor, R.; Moise, A. F.; Hamish, H.

    2016-12-01

    Across the globe, Virtual Laboratories (VLs), Science Gateways (SGs), and Virtual Research Environments (VREs) are being developed that enable users who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, etc. Outcomes range from enabling `long tail' researchers to more easily access specific data collections, to facilitating complex workflows on powerful supercomputers. In Australia, government funding has facilitated the development of a range of VLs through the National eResearch Collaborative Tools and Resources (NeCTAR) program. The VLs provide highly collaborative, research-domain oriented, integrated software infrastructures that meet user community needs. Twelve VLs have been funded since 2012, including the Virtual Geophysics Laboratory (VGL); Virtual Hazards, Impact and Risk Laboratory (VHIRL); Climate and Weather Science Laboratory (CWSLab); Marine Virtual Laboratory (MarVL); and Biodiversity and Climate Change Virtual Laboratory (BCCVL). These VLs share similar technical challenges, with common issues emerging on integration of tools, applications and access to data collections via both cloud-based environments and other distributed resources. While each VL began with a focus on a specific research domain, communities of practice have now formed across the VLs around common issues, facilitating the identification of best-practice case studies and new standards. As a result, tools are now being shared where the VLs access data via data services using international standards such as those of ISO, OGC, and W3C. The sharing of these approaches is starting to facilitate re-usability of infrastructure and is a step towards supporting interdisciplinary research. Whilst the focus of the VLs is Australia-centric, by using standards these environments can be extended to analysis of other international datasets. Many VL datasets are subsets of global datasets, and so extension to global is a ...

  13. Creating a virtual network of communication of information in view on the regime of information

    OpenAIRE

    Luiz Antonio Dias Leal; Isa Freire; Rosali Fernandez de Souza

    2013-01-01

    Presents the results of research that uses Gonzalez Gomez's concept of 'information system' to identify elements and actors within the domain of a virtual network of information communication. The research was conducted under the Program Good Agricultural Practices - Beef Cattle at the Brazilian Agricultural Research Corporation - EMBRAPA, which aims to make beef cattle production systems more profitable and competitive, ensuring the supply of safe food from sustainable production s...

  14. Virtual corporations, enterprise and organisation

    Directory of Open Access Journals (Sweden)

    Carmen RĂDUŢ

    2009-06-01

    Full Text Available Virtual organisation is a strategic paradigm that is centred on the use of information and ICT to create value. Virtual organisation is presented as a metamanagement strategy that has application in all value-oriented organisations. Within the concept of virtual organisation, the business model is an ICT-based construct that bridges and integrates enterprise strategic and operational concerns. Firms try to ameliorate the impacts of risk and product complexity by forming alliances and partnerships with others to spread the risk of new products and new ventures and to increase organisational competence. The result is a networked virtual organization.

  15. Simulation analysis of security performance of DPSKOCDMA network via virtual user scheme

    Directory of Open Access Journals (Sweden)

    Vishav Jyoti

    2012-07-01

    Full Text Available A novel technique to enhance the security of an optical code division multiple access (OCDMA) system against eavesdropping is proposed. It has been observed that when a single user is active in the network, an eavesdropper can easily sift the data being transmitted without decoding. To increase the security, a virtual user scheme is proposed and simulated on a differential phase shift keying (DPSK) OCDMA system. By using the virtual user scheme, the security of the DPSK-OCDMA system can be effectively improved, and the multiple access interference, which is generally considered to be a limitation of the OCDMA system, is used to increase the confidentiality of the system.

  16. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    Science.gov (United States)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing adopts the paradigm of virtualization; using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional cost, and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.

  17. Virtual private networks application in Nuclear Regulatory Authority of Argentina

    International Nuclear Information System (INIS)

    Glidewell, Donnie D.; Smartt, Heidi A.; Caskey, Susan A.; Bonino, Anibal D.; Perez, Adrian C.; Pardo, German R.; Vigile, Rodolfo S.; Krimer, Mario

    2004-01-01

    As a result of the existence of several regional delegations all over the country, a requirement arose to set up a secure data interchange structure. This would make possible the interconnection of these facilities and their communication with the Autoridad Regulatoria Nuclear (ARN) headquarters. The records these parties exchange are often of a classified nature, including sensitive data from the local safeguards inspectors. On the other hand, the establishment of this network should simplify the access of authorized users of nuclear and radioactive materials to the ARN databases, from remote sites and with significant trust levels. These requirements called for a network that would be not only private but also secure, providing data centralization and integrity assurance with strict user control. The first proposal was to implement a point-to-point link between the installations. This proposal was deemed economically unviable, and it had the disadvantage of not being easily reconfigurable. The availability of new technologies, and the accomplishment of Action Sheet 11 under an agreement between the Argentine Nuclear Regulatory Authority and the United States Department of Energy (DOE), opened a new path towards the resolution of this problem. By applying updated tunneling security protocols it was possible to build a manageable and secure network through the use of Virtual Private Networking (VPN) hardware. A first trial installation of this technology was implemented between ARN headquarters at Buenos Aires and the Southern Region Office at Bariloche, Argentina. This private network is at the moment under test, and it is planned to expand it to more sites in the country, reaching for example the nuclear power plants. The Bariloche installation had some interesting peculiarities. The solutions proposed for them proved very useful during the development of the network expansion plans, as they showed how to adapt the VPN technical requisites to the ...

  18. Virtual Victorians networks, connections, technologies

    CERN Document Server

    Alfano, Veronica

    2016-01-01

    Exploring how scholars use digital resources to reconstruct the 19th century, this volume probes key issues in the intersection of digital humanities and history. Part I examines the potential of online research tools for literary scholarship while Part II outlines a prehistory of digital virtuality by exploring specific Victorian cultural forms.

  19. Practical application of game theory based production flow planning method in virtual manufacturing networks

    Science.gov (United States)

    Olender, M.; Krenczyk, D.

    2016-08-01

    Modern enterprises have to react quickly to dynamic changes in the market due to changing customer requirements and expectations. One of the key areas of production management that must continuously evolve, by searching for new methods and tools for increasing the efficiency of manufacturing systems, is the area of production flow planning and control. These aspects are closely connected with the ability to implement the concepts of Virtual Enterprises (VE) and Virtual Manufacturing Networks (VMN), in which integrated infrastructures of flexible resources are created. In the proposed approach, the role of the players is performed by the objects associated with the objective functions, allowing multiobjective production flow planning problems to be solved using game theory, which is grounded in the analysis of strategic situations. For defined production system and production order models, ways of solving the production route planning problem in a VMN are presented through computational examples for different variants of production flow. A possible decision strategy, together with an analysis of the calculation results, is shown.
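    A toy illustration of the game-theoretic idea, with made-up payoffs: two "players" (objective functions) alternately best-respond over a small set of production routes until the chosen pair stabilizes at a pure Nash point. This is a generic sketch, not the paper's planning method.

        import numpy as np

        # cost[p][i, j]: cost to player p when player 1 picks route i and
        # player 2 picks route j (hypothetical numbers).
        cost = [np.array([[3, 1], [2, 4]]),     # player 1, e.g. makespan
                np.array([[2, 3], [1, 2]])]     # player 2, e.g. resource usage

        i = j = 0
        for _ in range(20):
            i_new = int(np.argmin(cost[0][:, j]))    # player 1 best response
            j_new = int(np.argmin(cost[1][i_new]))   # player 2 best response
            if (i_new, j_new) == (i, j):
                break                                # no player wants to deviate
            i, j = i_new, j_new

        print("equilibrium routes:", i, j)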

  20. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  1. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused...

  2. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  3. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  4. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
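    The paper's specific strategy is not reproduced here, but a simple low-cost static balancing heuristic of the same flavor (longest-processing-time-first) can be sketched as follows, with document sizes standing in for estimated text-mining cost; all names are illustrative.

        import heapq

        def balance(doc_sizes, n_workers):
            """Longest-processing-time-first: always give the next-largest
            document to the currently least-loaded worker."""
            heap = [(0, w, []) for w in range(n_workers)]  # (load, worker id, docs)
            heapq.heapify(heap)
            for doc, size in sorted(doc_sizes.items(), key=lambda kv: -kv[1]):
                load, w, docs = heapq.heappop(heap)
                docs.append(doc)
                heapq.heappush(heap, (load + size, w, docs))
            return {w: (load, docs) for load, w, docs in heap}

        sizes = {"abs%04d" % i: (i * 37) % 100 + 1 for i in range(1000)}
        for w, (load, docs) in sorted(balance(sizes, 8).items()):
            print(f"worker {w}: load {load:5d}, {len(docs)} documents")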

  5. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers and their properties are presented. Then the memory size and the arithmetic operations in the context of memory bandwidth are discussed. For an exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented; they reveal the details of the losses for a single operation. Then we analyze the global performance of a whole supercomputer by identifying reduction factors that bring down the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures, in a snapshot of January 1991, is briefly mentioned. Finally, some remarks on a user-friendly architecture for a supercomputer are made. (orig.)
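    The reduction-factor view can be made concrete with a toy calculation: each architectural loss multiplies into the gap between peak and real performance. The numbers below are illustrative only, not measurements from the paper.

        peak_mflops = 333.0                    # theoretical peak of one vector CPU
        factors = {
            "memory bandwidth limit":         0.45,  # triad is load/store bound
            "vector startup / short vectors": 0.80,
            "non-vectorizable code (Amdahl)": 0.70,
            "synchronization and overhead":   0.90,
        }
        real = peak_mflops
        for name, f in factors.items():
            real *= f
            print(f"after {name:32s}: {real:7.1f} MFLOPS")
        print(f"overall efficiency: {real / peak_mflops:.1%} of peak")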

  6. A Bilevel Scheduling Approach for Modeling Energy Transaction of Virtual Power Plants in Distribution Networks

    Directory of Open Access Journals (Sweden)

    F. Nazari

    2017-03-01

    Full Text Available With the increasing use of distributed generation (DG) in distribution network operation, an entity called the virtual power plant (VPP) has been introduced to control, dispatch and aggregate the generation of DGs, enabling them to participate either in the electricity market or in distribution network operation. The participation of VPPs in the electricity market has created challenges in fairly allocating payments and benefits between VPPs and the distribution network operator (DNO). This paper presents a bilevel scheduling approach to model the energy transaction between VPPs and the DNO. The upper level corresponds to the decision making of the VPPs, which bid their long-term contract prices so that their own profits are maximized, and the lower level represents the DNO's decision making to supply the electricity demand of the network while minimizing its overall cost. The proposed bilevel scheduling approach is transformed into a single-level optimization problem using its Karush-Kuhn-Tucker (KKT) optimality conditions. Several scenarios are applied to scrutinize the effectiveness and usefulness of the proposed model.

  7. Virtual Distances Used for Optimization of Applicationsin the Pervasive Computing Domain

    DEFF Research Database (Denmark)

    Schougaard, Kari Rye

    2004-01-01

    This paper presents the notion of virtual distances -- communication proximity -- to describe the quality of a connection between two devices. We use virtual distances as the basis of optimizations performed by a virtual machine, where a part of an application can be moved to another device if th... advantage of temporarily available resources at the current local area network or through ad-hoc networks...

  8. SiGN-SSM: open source parallel software for estimating gene networks with state space models.

    Science.gov (United States)

    Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru

    2011-04-15

    SiGN-SSM is open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), a statistical dynamic model suitable for analyzing short and/or replicated time series of gene expression profiles. SiGN-SSM implements a novel parameter constraint that is effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public License (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code. Pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information for SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.
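    For readers unfamiliar with the model class, a minimal linear-Gaussian state space model with a Kalman filter is sketched below; this is generic textbook code with made-up matrices, not SiGN-SSM's estimation procedure.

        import numpy as np

        F = np.array([[0.9, 0.1], [0.0, 0.8]])    # state transition
        H = np.array([[1.0, 0.0]])                # observation matrix
        Q = 0.01 * np.eye(2)                      # system noise covariance
        R = 0.10 * np.eye(1)                      # observation noise covariance

        def kalman_filter(y):
            x, P = np.zeros(2), np.eye(2)
            states = []
            for yt in y:
                x, P = F @ x, F @ P @ F.T + Q                 # predict
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
                x = x + K @ (yt - H @ x)                      # update with data
                P = (np.eye(2) - K @ H) @ P
                states.append(x.copy())
            return np.array(states)

        y = np.sin(np.linspace(0, 6, 50))[:, None] + 0.3 * np.random.randn(50, 1)
        print(kalman_filter(y)[-1])   # filtered state at the final time point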

  9. Researching virtual worlds: methodologies for studying emergent practices

    CERN Document Server

    Phillips, Louise

    2013-01-01

    This volume presents a wide range of methodological strategies that are designed to take into account the complex, emergent, and continually shifting character of virtual worlds. It interrogates how virtual worlds emerge as objects of study through the development and application of various methodological strategies. Virtual worlds are not considered objects that exist as entities with fixed attributes independent of our continuous engagement with them and interpretation of them. Instead, they are conceived of as complex ensembles of technology, humans, symbols, discourses, and economic structures, ensembles that emerge in ongoing practices and specific situations. A broad spectrum of perspectives and methodologies is presented: Actor-Network-Theory and post-Actor-Network-Theory, performativity theory, ethnography, discourse analysis, Sense-Making Methodology, visual ethnography, multi-sited ethnography, and Social Network Analysis.

  10. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    Science.gov (United States)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40-samples-per-second seismic and state-of-health data are recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fibre Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection against short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  11. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    The SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system with 300 compute nodes, each including two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
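
    The Green500 metric quoted above is sustained HPL performance divided by average power draw. As a purely hypothetical worked example (the abstract reports only the 2.3 GFLOPS/W ratio, not SANAM's absolute figures):

        \text{power efficiency} \;=\; \frac{R_{\max}}{P}
        \;=\; \frac{230{,}000\ \text{GFLOPS}}{100{,}000\ \text{W}}
        \;=\; 2.3\ \text{GFLOPS/W}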

  13. Network and user interface for PAT DOME virtual motion environment system

    Science.gov (United States)

    Worthington, J. W.; Duncan, K. M.; Crosier, W. G.

    1993-01-01

    The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) provides astronauts a virtual microgravity sensory environment designed to help alleviate the symptoms of space motion sickness (SMS). The system consists of four microcomputers networked to provide real-time control, and an image generator (IG) driving a wide-angle video display inside a dome structure. The spherical display demands distortion correction. The system is currently being modified with a new graphical user interface (GUI) and a new Silicon Graphics IG. This paper concentrates on the new GUI and the networking scheme. The new GUI eliminates proprietary graphics hardware and software, and instead makes use of standard, low-cost PC video (CGA) and off-the-shelf software (Microsoft's Quick C). Mouse selection for user input is supported. The new Silicon Graphics IG requires an Ethernet interface. The microcomputer known as the Real Time Controller (RTC), which has overall control of the system and is written in Ada, was modified to use the free public-domain NCSA Telnet software for Ethernet communications with the Silicon Graphics IG. The RTC also maintains the original ARCNET communications through Novell Netware IPX with the rest of the system. The Telnet TCP/IP protocol was first used for real-time communication, but because of buffering problems the Telnet datagram (UDP) protocol had to be implemented instead. Since the Telnet modules are written in C, the Ada pragma 'Interface' was used to interface with the network calls.

  14. Virtual Sensors in a Web 2.0 Digital Watershed

    Science.gov (United States)

    Liu, Y.; Hill, D. J.; Marini, L.; Kooper, R.; Rodriguez, A.; Myers, J. D.

    2008-12-01

    The lack of rainfall data in many watersheds is one of the major barriers to modeling and studying many environmental and hydrological processes and to supporting decision making; there are simply not enough rain gages on the ground. To overcome this data-scarcity issue, a Web 2.0 digital watershed is developed at NCSA (National Center for Supercomputing Applications), where users can point and click on a web-based Google Maps interface and create new precipitation virtual sensors at any location within the coverage region of a NEXRAD station. A set of scientific workflows is implemented to perform spatial, temporal and thematic transformations of the near-real-time NEXRAD Level II data. Such workflows can be triggered by the users' actions and generate either rainfall-rate or rainfall-accumulation streaming data at a user-specified time interval. We will discuss the underlying components of this digital watershed, which consists of a semantic content management middleware, a semantically enhanced streaming data toolkit, virtual sensor management functionality, and a RESTful (REpresentational State Transfer) web service that can trigger the workflow execution. This loosely coupled architecture presents a generic framework for constructing a Web 2.0 style digital watershed. An implementation of this architecture for the Upper Illinois River Basin will be presented. We will also discuss the implications of the virtual sensor concept for the broader environmental observatory community and how this concept helps us move towards a participatory digital watershed.
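
    A sketch of how a client might trigger such a workflow over REST (the endpoint, field names and response keys here are hypothetical placeholders, not the published NCSA interface):

        import requests  # assumes the third-party `requests` package

        BASE = "http://example.org/digital-watershed/api"  # placeholder host

        def create_virtual_sensor(lat: float, lon: float, interval_min: int) -> str:
            """POST a precipitation virtual-sensor request; return its id."""
            resp = requests.post(f"{BASE}/virtual-sensors", json={
                "latitude": lat,
                "longitude": lon,
                "product": "rainfall_rate",   # or "rainfall_accumulation"
                "interval_minutes": interval_min,
            })
            resp.raise_for_status()
            return resp.json()["sensor_id"]   # hypothetical response field

        sensor = create_virtual_sensor(40.11, -88.24, interval_min=15)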

  15. Cyberinfrastructure for high energy physics in Korea

    International Nuclear Information System (INIS)

    Cho, Kihyeon; Kim, Hyunwoo; Jeung, Minho

    2010-01-01

    We introduce the hierarchy of cyberinfrastructure, which consists of infrastructure (supercomputing and networks), Grid, e-Science, community and physics, from the bottom layer to the top layer. KISTI is the national headquarters for supercomputing, networks, Grid and e-Science in Korea, and is therefore the best place for high-energy physicists to use cyberinfrastructure. We illustrate this concept with the CDF and ALICE experiments. The goal of e-Science is to study high energy physics anytime and anywhere, even when not on site at the accelerator laboratories. Its components are data production, data processing and data analysis. Data production means taking both on-line and off-line shifts remotely. Data processing means running jobs anytime, anywhere using Grid farms. Data analysis means working together to publish papers using collaborative environments such as the EVO (Enabling Virtual Organization) system. We also present the global community activities of FKPPL (France-Korea Particle Physics Laboratory) and physics as the top layer.

  16. On the Role of Hyper-arid Regions within the Virtual Water Trade Network

    Science.gov (United States)

    Aggrey, James; Alshamsi, Aamena; Molini, Annalisa

    2016-04-01

    Climate change, economic development, and population growth are bound to increasingly impact global water resources, posing a significant threat to the sustainable development of arid regions, where water consumption greatly exceeds the natural carrying capacity, population growth rates are high, and climate variability will affect both water consumption and availability. Virtual Water Trade (VWT) - i.e. the international trade network of water-intensive products - has been proposed as a possible solution to optimize the allocation of water resources on the global scale. By increasing food availability and lowering food prices, it may in fact help the rapid development of water-scarce regions. The structure of the VWT network has been analyzed by a number of authors in connection with trade policies, socioeconomic constraints and agricultural efficiency. However, a systematic analysis of the structure and dynamics of the VWT network conditional on aridity, climatic forcing and energy availability is still missing. Our goal is hence to analyze the role of arid and hyper-arid regions within the VWT network under diverse climatic, demographic, and energy constraints, with an aim to contribute to the ongoing energy-water-food nexus discussion. In particular, we focus on the hyper-arid lands of the Arabian Peninsula, the role they play in the global network, and the assessment of their specific criticalities, as reflected in the resilience of the VWT network.

  17. QoS-Aware Resource Allocation for Network Virtualization in an Integrated Train Ground Communication System

    Directory of Open Access Journals (Sweden)

    Li Zhu

    2018-01-01

    Full Text Available Urban rail transit plays an increasingly important role in urbanization processes. Communications-Based Train Control (CBTC) systems, Passenger Information Systems (PIS), and Closed-Circuit Television (CCTV) are key applications of urban rail transit that ensure its normal operation. In existing urban rail transit systems, different applications are deployed with independent train-ground communication systems. When train-ground communication systems are built repeatedly, limited wireless spectrum is wasted and maintenance becomes complicated. In this paper, we design a network-virtualization-based integrated train-ground communication system in which all the applications in urban rail transit can share the same physical infrastructure. In order to better satisfy the Quality of Service (QoS) requirement of each application, this paper proposes a virtual resource allocation algorithm based on QoS guarantees, base station load balance, and application station fairness. Moreover, building on recent advances in distributed convex optimization, we exploit a distributed optimization method based on the alternating direction method of multipliers (ADMM) to solve the virtual resource allocation problem. Extensive simulation results indicate that the QoS of the designed integrated train-ground communication system can be improved significantly using the proposed algorithm.
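
    The abstract does not spell out the splitting used; in its generic scaled form, ADMM solves \min_{x,z} f(x) + g(z) subject to Ax + Bz = c by iterating

        \begin{aligned}
        x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\rho}{2}\bigl\lVert Ax + Bz^{k} - c + u^{k} \bigr\rVert_2^2\\
        z^{k+1} &= \operatorname*{arg\,min}_{z}\; g(z) + \tfrac{\rho}{2}\bigl\lVert Ax^{k+1} + Bz - c + u^{k} \bigr\rVert_2^2\\
        u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c
        \end{aligned}

    which suits the setting here because the x- and z-updates can decompose across base stations and applications.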

  18. Neuronal correlates of a virtual-reality-based passive sensory P300 network.

    Science.gov (United States)

    Chen, Chun-Chuan; Syue, Kai-Syun; Li, Kai-Chiun; Yeh, Shih-Ching

    2014-01-01

    P300, a positive event-related potential (ERP) evoked at around 300 ms after a stimulus, can be elicited using an active or passive oddball paradigm. Active P300 requires a person's intentional response, whereas passive P300 does not. Passive P300 has been used in incommunicative patients for consciousness detection and brain-computer interfaces. Active and passive P300 differ in amplitude, but not in latency or scalp distribution. However, no study has addressed the mechanism underlying the production of passive P300. In particular, it remains unclear whether the passive P300 shares an identical generating network architecture with the active P300 when no response is required. This study aims to explore the hierarchical network of passive sensory P300 production using dynamic causal modelling (DCM) for ERP and a novel virtual reality (VR)-based passive oddball paradigm. Moreover, we investigated the causal relationships of this passive P300 network and the changes in connection strength to address the possible functional roles. A classical ERP analysis was performed to verify that the proposed VR-based game can reliably elicit passive P300. The DCM results suggested that the passive and active P300 share the same parietal-frontal neural network for attentional control and that, within the passive network, the feed-forward modulation is stronger than the feed-back one. The functional role of this forward modulation may indicate the delivery of sensory information, automatic detection of differences, and stimulus-driven attentional processes involved in performing this passive task. To the best of our knowledge, this is the first study to address the passive P300 network. The results of this study may provide a reference for future clinical studies addressing network alterations under pathological states of incommunicative patients. However, caution is required when comparing patients' analytic results with this study. For example, the task

  19. Neuronal correlates of a virtual-reality-based passive sensory P300 network.

    Directory of Open Access Journals (Sweden)

    Chun-Chuan Chen

    Full Text Available P300, a positive event-related potential (ERP) evoked at around 300 ms after a stimulus, can be elicited using an active or passive oddball paradigm. Active P300 requires a person's intentional response, whereas passive P300 does not. Passive P300 has been used in incommunicative patients for consciousness detection and brain-computer interfaces. Active and passive P300 differ in amplitude, but not in latency or scalp distribution. However, no study has addressed the mechanism underlying the production of passive P300. In particular, it remains unclear whether the passive P300 shares an identical generating network architecture with the active P300 when no response is required. This study aims to explore the hierarchical network of passive sensory P300 production using dynamic causal modelling (DCM) for ERP and a novel virtual reality (VR)-based passive oddball paradigm. Moreover, we investigated the causal relationships of this passive P300 network and the changes in connection strength to address the possible functional roles. A classical ERP analysis was performed to verify that the proposed VR-based game can reliably elicit passive P300. The DCM results suggested that the passive and active P300 share the same parietal-frontal neural network for attentional control and that, within the passive network, the feed-forward modulation is stronger than the feed-back one. The functional role of this forward modulation may indicate the delivery of sensory information, automatic detection of differences, and stimulus-driven attentional processes involved in performing this passive task. To the best of our knowledge, this is the first study to address the passive P300 network. The results of this study may provide a reference for future clinical studies addressing network alterations under pathological states of incommunicative patients. However, caution is required when comparing patients' analytic results with this study. For example

  20. Innovation in Virtual Networks

    DEFF Research Database (Denmark)

    Hu, Yimei; Sørensen, Olav Jull

    2011-01-01

    The purpose of this article is to explore and highlight the particular innovation characteristics and modes of the Chinese game industry from a networking perspective.

  1. Building Modelling Methodologies for Virtual District Heating and Cooling Networks

    Energy Technology Data Exchange (ETDEWEB)

    Saurav, Kumar; Choudhury, Anamitra R.; Chandan, Vikas; Lingman, Peter; Linder, Nicklas

    2017-10-26

    District heating and cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste-heat recovery, etc., in order to increase overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivates the need to model these complex systems. Large-scale modelling of DHC networks is challenging, as such networks comprise many interacting components. In this paper we present two building modelling methodologies for the consumer buildings. These models will be further integrated with the network model and the control-system layer to create a virtual test bed for the entire DHC system. The model is validated using data collected from a real-life DHC system located in Luleå, a city on the coast of northern Sweden. The test bed will then be used to simulate various test cases such as peak-energy reduction and overall demand reduction.
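
    The abstract does not detail the two methodologies; a common minimal baseline for consumer-building thermal response, shown here purely as an assumed illustration, is a first-order resistance-capacitance (RC) model, C dT/dt = (T_out - T)/R + Q:

        import numpy as np

        def simulate_building(t_out, q_heat, r=0.005, c=2.0e7, t0=20.0, dt=3600.0):
            """Lumped RC building model (toy values, not calibrated to Lulea).
            t_out: outdoor temperature [degC] per step, q_heat: heat input [W],
            r: thermal resistance [K/W], c: thermal capacitance [J/K]."""
            t_in = np.empty(len(t_out) + 1)
            t_in[0] = t0
            for k in range(len(t_out)):
                flow = (t_out[k] - t_in[k]) / r + q_heat[k]   # net heat flow [W]
                t_in[k + 1] = t_in[k] + flow * dt / c
            return t_in

        # 24 h at 0 degC outside with 5 kW of district heat delivered:
        trace = simulate_building(np.zeros(24), np.full(24, 5000.0))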

  2. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that, contrary to expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  3. Streaming Parallel GPU Acceleration of Large-Scale filter-based Spiking Neural Networks

    NARCIS (Netherlands)

    L.P. Slazynski (Leszek); S.M. Bohte (Sander)

    2012-01-01

    The arrival of graphics processing (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of

  4. Utilizing ICN/CCN for service and VM migration support in virtualized LTE systems

    NARCIS (Netherlands)

    Karimzadeh Motallebi Azar, Morteza; Satria, Triadimas; Karagiannis, Georgios

    2014-01-01

    One of the most important concepts used in mobile networks such as LTE (Long Term Evolution) is service continuity: a mobile user moving from one network to another should not lose an ongoing service. In cloud-based (virtualized) LTE systems, services are hosted on Virtual Machines (VMs) that

  5. Empathic Cyberactivism: The Potential of Hyperconnected Social Media Networks and Empathic Virtual Reality for Feminism

    Directory of Open Access Journals (Sweden)

    Penelope Kemekenidou

    2016-10-01

    Full Text Available The rise of misogyny on social networks feels both devastating and endless. Whether one believes that misogyny has risen to a new level, or that it has simply become more visible through the internet, one thing is clear: with the ubiquity and accessibility of ‘immortal’ online information, harassment and discrimination, shared via hyperconnected social media networks, can be taken to a new, much more visible level. Hyperconnectivity enables sexism to multiply on the web – but it can also be the solution to fight it. In the context of activism, hyperconnectivity can be a major force to combat inequality—given that this hyperconnectivity is linked to empathy and not aggression. If this is the case, I argue, new technologies, for example virtual reality, open up new spaces of empathetic interaction.

  6. THE INTEREST OF GEOGRAPHICAL INFORMATION, ARTIFICIAL INTELLIGENCE AND VIRTUAL REALITY FOR THE UNDERGROUND NETWORK REPRESENTATION

    Directory of Open Access Journals (Sweden)

    M. Lacroix

    2016-01-01

    Full Text Available Two years ago, 63 people died and more than 150 were seriously injured in Beijing (China) because of damage to a hydrocarbon pipeline. Urban networks are invisible because they are usually buried between 1 and 1.5 meters underground. They should be identified to prevent such accidents, which involve workers as well as the public. Rural and urban districts, network concessionaires and contractors: everyone could benefit from their networks becoming safer. To prevent such accidents and protect workers and the public, new regulations propose to identify and secure the buried networks. It is therefore important to develop software that addresses both the risk-management process and risk visualization. This work is structured around three major sections: the utility of geographical information to determine the minimal distances and topological relations between the networks themselves, and with the other elements in their vicinity; the use of artificial intelligence tools, more particularly expert systems, to take the current regulation into account and determine the accident risk probability; and the contribution of virtual reality to perceiving the underground world.

  7. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2017-06-01

    Full Text Available NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
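
    A toy sketch of the dry-run idea (conceptual only, not NEST's actual API): a single process executes every step of an M-rank simulation but fabricates the spike buffers that real communication would deliver, so memory use and runtime remain representative:

        import numpy as np

        def dry_run(num_procs: int, local_n: int, steps: int, seed: int = 0):
            """Mimic one rank of a num_procs-rank job without any MPI calls."""
            rng = np.random.default_rng(seed)
            v = np.zeros(local_n)                        # toy membrane potentials
            for _ in range(steps):
                v += rng.normal(0.02, 0.01, local_n)     # toy integration step
                spiked = v > 1.0
                v[spiked] = 0.0
                local_spikes = np.flatnonzero(spiked)
                # A real rank would exchange spikes via MPI here; the dry run
                # instead replicates its own spikes as if every remote rank
                # had sent a similar buffer.
                incoming = np.tile(local_spikes, num_procs)
                v[incoming % local_n] += 0.05            # toy spike delivery

        dry_run(num_procs=1024, local_n=10_000, steps=100)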

  8. Interpretations of virtual reality.

    Science.gov (United States)

    Voiskounsky, Alexander

    2011-01-01

    University students were surveyed to learn what they know about virtual realities. Two studies were administered with a half-year interval, in which the students (N=90, specializing either in mathematics and science or in social science and humanities) were asked to name particular examples of virtual realities. The second study, but not the first, was administered after the participants had had the chance to see the movie "Avatar" (whether they actually saw it was not investigated). While the students in both studies widely believed that activities such as social networking and online gaming represent virtual realities, some other examples provided by the students differ between the two studies: in the second study the participants expressed a better understanding of items related to virtual realities. At the same time, not a single participant reported particular psychological states (either regular or altered) as examples of virtual realities. Substantial popularization efforts are needed to acquaint the public, including college students, with virtual realities and to let the public adequately understand how such systems work.

  9. Virtual learning networks for sustainable development

    NARCIS (Netherlands)

    De Kraker, Joop; Cörvers, Ron

    2010-01-01

    Sustainable development is a participatory, multi-actor process. In this process, learning plays a major role as participants have to exchange and integrate a diversity of perspectives and types of knowledge and expertise in order to arrive at innovative, jointly supported solutions. Virtual

  10. A study on haptic collaborative game in shared virtual environment

    Science.gov (United States)

    Lu, Keke; Liu, Guanyang; Liu, Lingzhi

    2013-03-01

    A study of a collaborative game with haptic feedback in a shared virtual environment over computer networks is introduced in this paper. A collaborative task was used in which players located at remote sites played the game together. Unlike traditional networked multiplayer games, players receive both visual and haptic feedback in the virtual environment. The experiment was designed with two conditions: visual feedback only, and combined visual-haptic feedback. The goal of the experiment is to assess the impact of force feedback on collaborative task performance. Results indicate that haptic feedback is beneficial for performance enhancement in collaborative games in shared virtual environments. The outcomes of this research can have a powerful impact on networked computer games.

  11. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
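
    A minimal sketch of the light-weight wrapper idea, assuming mpi4py and an illustrative payload script (our reconstruction, not the actual PanDA pilot code): every MPI rank runs one independent single-threaded payload, so N serial jobs occupy a single N-rank batch allocation:

        import subprocess
        import sys
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # One input file per rank; the payload script and file names are
        # illustrative placeholders.
        cmd = [sys.executable, "simulate_event.py", f"--input=events_{rank:05d}.dat"]
        ret = subprocess.call(cmd)

        # Gather exit codes on rank 0 so the wrapper can report overall success.
        codes = comm.gather(ret, root=0)
        if rank == 0:
            print("failed payloads:", sum(1 for c in codes if c != 0))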

  12. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data-intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between Sandia organizations 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project, provides an enumeration of publications and other public discussion of the work, and concludes with a discussion of future work and impact from the project. The appendix contains reprints of the refereed publications resulting from this work.

  13. Instant Hyper-V Server Virtualization Starter

    CERN Document Server

    Eguibar, Vicente Rodriguez

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. The approach is tutorial in style, guiding users in an orderly manner toward virtualization. This book is conceived for system administrators and advanced PC enthusiasts who want to venture into the virtualization world. Although this book starts from scratch, knowledge of server operating systems, LANs and networking has to be in place. A good background in server administration is desirable, including networking service

  14. Optical network democratization.

    Science.gov (United States)

    Nejabati, Reza; Peng, Shuping; Simeonidou, Dimitra

    2016-03-06

    The current Internet infrastructure is not able to support independent evolution and innovation at physical- and network-layer functionalities, protocols and services, while at the same time supporting the increasing bandwidth demands of evolving and heterogeneous applications. This paper addresses this problem by proposing a completely democratized optical network infrastructure. It introduces the novel concepts of the optical white box and the bare-metal optical switch as key technology enablers for democratizing optical networks. These are programmable optical switches whose hardware is loosely connected internally and is completely separated from their control software. To manage their complexity, a multi-dimensional abstraction mechanism using software-defined network technology is proposed. It creates a universal model of the proposed switches without exposing their technological details. It also enables a conventional network programmer to develop network applications for control of the optical network without specific technical knowledge of the physical layer. Furthermore, a novel optical network virtualization mechanism is proposed, enabling the composition and operation of multiple coexisting and application-specific virtual optical networks sharing the same physical infrastructure. Finally, the optical white box and the abstraction mechanism are experimentally evaluated, while the virtualization mechanism is evaluated with simulation. © 2016 The Author(s).

  15. Virtual Private LAN Services Over IP/MPLS Networks and Router Configurations

    Directory of Open Access Journals (Sweden)

    Pınar KIRCI

    2015-06-01

    Full Text Available The rising number of users and ever-growing traffic rates over networks reveal the need for higher bandwidth and transmission rates. At every packet transmission, routers must consult their routing tables, which increases router load and processing time. Today, users need high-level security, faster data transmission and easily managed network structures because of increasing technology usage; MPLS network structures can provide these requirements with their QoS features. In our work, a topology is first constructed with the routers used in Alcatel-Lucent laboratories. The OSPF (Open Shortest Path First) routing protocol and MPLS (Multiprotocol Label Switching) technologies are used over the topology. Afterwards, E-pipe (Ethernet pipe) and VPLS (Virtual Private LAN Service) configurations are performed on the routers. To illustrate the network data traffic, three tests are performed in the study. Router configurations are performed with Secure-CRT and with the still-developing i-Gen software. With i-Gen, many routers can be configured through a user-friendly interface: instead of performing the configurations one by one with Secure-CRT, the user can configure the routers easily by entering the needed values into the i-Gen software, which minimizes and streamlines the user's workload. In our work, Secure-CRT, which is mostly preferred for router configuration under the Windows operating system, and i-Gen, which is developed by Alcatel-Lucent, are both considered, and the results obtained with each are discussed.

  16. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  17. Design and implementation of dynamic hybrid Honeypot network

    Science.gov (United States)

    Qiao, Peili; Hu, Shan-Shan; Zhai, Ji-Qiang

    2013-05-01

    A method of constructing a dynamic and self-adaptive virtual network is suggested to puzzle adversaries, delay and divert attacks, exhaust attacker resources and collect attack information. The concepts of the honeypot and of Honeyd, the framework for virtual honeypots, are introduced. Network-scanning techniques, including active fingerprint recognition, are analyzed. A dynamic virtual network system is designed and implemented, in which a virtual network similar to a real network topology is built from messages collected in real environments. In this way the system can perplex attackers during an attack and support further analysis and research of the attacks. Tests of the system prove that this design can successfully simulate a real network environment and can be used in network security analysis.

  18. Reference models supporting enterprise networks and virtual enterprises

    DEFF Research Database (Denmark)

    Tølle, Martin; Bernus, Peter

    2003-01-01

    This article analyses different types of reference models applicable to support the set-up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs, aiming to convert the task of setting up a VE into a configuration task and hence reducing the time needed for VE creation. The reference models are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA), based upon GERAM and created in the IMS GLOBEMEN project.

  19. Green Virtualization for Multiple Collaborative Cellular Operators

    KAUST Repository

    Farooq, Muhammad Junaid

    2017-06-05

    This paper proposes and investigates a green virtualization framework for infrastructure sharing among multiple cellular operators whose networks are powered by a combination of conventional and renewable sources of energy. Under the proposed framework, the virtual network formed by unifying the radio access infrastructures of all operators is optimized for minimum energy consumption by deactivating base stations (BSs) with low traffic loads. The users initially associated with those BSs are off-loaded to neighboring active ones. A fairness criterion for collaboration based on roaming prices is introduced to cover the additional energy costs incurred by host operators. The framework also ensures that no collaborating operator is negatively affected by its participation in the proposed virtualization. A multi-objective linear programming problem is formulated to achieve energy and cost efficiency of the networks' operation by identifying the set of inter-operator roaming prices. For the case when collaboration among all operators is infeasible due to profitability, capacity, or power constraints, an iterative algorithm is proposed to determine the groups of operators that can viably collaborate. Results show significant energy savings using the proposed virtualization as compared to the standalone case. Moreover, collaborative operators exploiting locally generated renewable energy are rewarded more than traditional ones.
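
    A simplified illustration (our own, not the paper's exact multi-objective formulation) of the base-station deactivation program, with binary activation variables x_b, association variables y_{ub}, per-BS power P_b, host-operator roaming price \pi_b, user demands r_u and capacities C_b:

        \begin{aligned}
        \min_{x,\,y}\quad & \sum_{b} P_b\, x_b \;+\; \sum_{u}\sum_{b} \pi_b\, r_u\, y_{ub} && \text{(energy + roaming payments)}\\
        \text{s.t.}\quad & \textstyle\sum_{b} y_{ub} = 1 \quad \forall u && \text{(every user served)}\\
        & y_{ub} \le x_b \quad \forall u,b && \text{(only active BSs serve)}\\
        & \textstyle\sum_{u} r_u\, y_{ub} \le C_b\, x_b \quad \forall b && \text{(capacity)}\\
        & x_b,\, y_{ub} \in \{0,1\}
        \end{aligned}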

  20. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  1. Virtual reality for spherical images

    Science.gov (United States)

    Pilarczyk, Rafal; Skarbek, Władysław

    2017-08-01

    This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows creating a virtual-reality 360-degree video player using standard OpenGL ES rendering methods. It provides network methods to connect to a web server acting as an application resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between application and server. The framework implements methods for an event-driven process of rendering additional content based on the video timestamp and the virtual-reality head point of view.

  2. Local and global perspectives on the virtual water trade

    Directory of Open Access Journals (Sweden)

    S. Tamea

    2013-03-01

    Full Text Available Recent studies on fluxes of virtual water are showing how the global food and goods trade interconnects the water resources of different and distant countries, conditioning the local water balances. This paper presents and discusses the assessment of virtual water fluxes between a single country and its network of trading partners, delineating a country's virtual water budget in space and time (years 1986–2010). The fluxes between the country under study and its importing/exporting partners are visualized with a geographical representation shaping the trade network as a virtual river/delta. Time variations of exchanged fluxes are quantified to show possible trends in the virtual water balance, while characterizing the time evolution of the trade network and its composition in terms of product categories (plant-based, animal-based, luxury food, and non-edible). The average distance traveled by virtual water to arrive to the place of consumption is also introduced as a new measure for the analysis of globalization of the virtual water trade. Using Italy as an example, we find that food trade has a steadily growing importance compared to domestic production, with a major component represented by plant-based products, and luxury products taking an increasingly larger share (26% in 2010). In 2010 Italy had an average net import of 55 km3 of virtual water (38 km3 in 1986), a value which poses the country among the top net importers in the world. On average each cubic meter of virtual water travels nearly 4000 km before entering Italy, while export goes to relatively closer countries (average distance: 2600 km), with increasing trends in time which are almost unique among the world countries. Analyses proposed for Italy are replicated for 10 other world countries, triggering similar investigations on different socio-economic actualities.

  3. Local and global perspectives on the virtual water trade

    Science.gov (United States)

    Tamea, S.; Allamano, P.; Carr, J. A.; Claps, P.; Laio, F.; Ridolfi, L.

    2013-03-01

    Recent studies on fluxes of virtual water are showing how the global food and goods trade interconnects the water resources of different and distant countries, conditioning the local water balances. This paper presents and discusses the assessment of virtual water fluxes between a single country and its network of trading partners, delineating a country's virtual water budget in space and time (years 1986-2010). The fluxes between the country under study and its importing/exporting partners are visualized with a geographical representation shaping the trade network as a virtual river/delta. Time variations of exchanged fluxes are quantified to show possible trends in the virtual water balance, while characterizing the time evolution of the trade network and its composition in terms of product categories (plant-based, animal-based, luxury food, and non-edible). The average distance traveled by virtual water to arrive to the place of consumption is also introduced as a new measure for the analysis of globalization of the virtual water trade. Using Italy as an example, we find that food trade has a steadily growing importance compared to domestic production, with a major component represented by plant-based products, and luxury products taking an increasingly larger share (26% in 2010). In 2010 Italy had an average net import of 55 km3 of virtual water (38 km3 in 1986), a value which poses the country among the top net importers in the world. On average each cubic meter of virtual water travels nearly 4000 km before entering Italy, while export goes to relatively closer countries (average distance: 2600 km), with increasing trends in time which are almost unique among the world countries. Analyses proposed for Italy are replicated for 10 other world countries, triggering similar investigations on different socio-economic actualities.

  4. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite-difference methods on a supercomputer. Curves of the response were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters.
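
    As an illustration of the finite-difference approach (a toy 2-D explicit scheme on a square grid, not the paper's toroidal-electrode geometry or soil models):

        import numpy as np

        def heat_step(T, alpha, dx, dt, t_boundary=20.0):
            """One explicit (FTCS) update of dT/dt = alpha * laplacian(T);
            stable for dt <= dx**2 / (4 * alpha)."""
            lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                   np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
            T = T + alpha * dt * lap
            T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = t_boundary  # fixed edges
            return T

        # Toy run: 1 m cells, soil diffusivity ~1e-6 m^2/s, hot node at centre.
        T = np.full((101, 101), 20.0)
        alpha, dx = 1.0e-6, 1.0
        dt = 0.2 * dx**2 / alpha              # well inside the stability limit
        for _ in range(1000):
            T[50, 50] = 80.0                  # electrode held at 80 degC
            T = heat_step(T, alpha, dx, dt)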

  5. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Pais Pitta de Lacerda Ruivo, Tiago [IIT, Chicago; Bernabeu Altayo, Gerard [Fermilab; Garzoglio, Gabriele [Fermilab; Timm, Steven [Fermilab; Kim, Hyun-Woo [Fermilab; Noh, Seo-Young [KISTI, Daejeon; Raicu, Ioan [IIT, Chicago

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of InfiniBand hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an InfiniBand network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR InfiniBand network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  6. Hardware/software virtualization for the reconfigurable multicore platform.

    NARCIS (Netherlands)

    Ferger, M.; Al Kadi, M.; Hübner, M.; Koedam, M.L.P.J.; Sinha, S.S.; Goossens, K.G.W.; Marchesan Almeida, Gabriel; Rodrigo Azambuja, J.; Becker, Juergen

    2012-01-01

    This paper presents the Flex Tiles approach for the virtualization of hardware and software for a reconfigurable multicore architecture. The approach enables the virtualization of a dynamic tile-based hardware architecture consisting of processing tiles connected via a network-on-chip and a

  7. When should virtual cybercrime be brought under the scope of the criminal law?

    NARCIS (Netherlands)

    Strikwerda, Litska; Rogers, Marcus; Seigfried-Spellar, Kathryn C.

    2012-01-01

    This paper is about the question when virtual cybercrime should be brought under the scope of the criminal law. By virtual cybercrime I mean crime that involves a specific aspect of computers or computer networks: virtuality. Examples of virtual cybercrime are: virtual child pornography, theft of

  8. Demonstration of Supervisory Control and Data Acquisition (SCADA) Virtualization Capability in the US Army Research Laboratory (ARL)/Sustaining Base Network Assurance Branch (SBNAB) US Army Cyber Analytics Laboratory (ACAL) SCADA Hardware Testbed

    Science.gov (United States)

    2015-05-01

    application,1 while the simulated PLC software is the open-source ModbusPal Java application. When queried using the Modbus TCP protocol, ModbusPal reports ... and programmable logic controller (PLC) components. The HMI and PLC components were instantiated with software and installed in multiple virtual ... creating and capturing HMI-PLC network traffic over a 24-h period in the virtualized network and inspecting the packets for errors. Test the

  9. Introduction of Virtualization Technology to Multi-Process Model Checking

    Science.gov (United States)

    Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu

    2009-01-01

    Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.

  10. SCM: A method to improve network service layout efficiency with network evolution.

    Science.gov (United States)

    Zhao, Qi; Zhang, Chuanhao; Zhao, Zheng

    2017-01-01

    Network services are an important component of the Internet, used to extend network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, as the network evolves, the network service layout may become inefficient. To address this problem, this paper proposes a service chain migration (SCM) method within the "software defined network + network function virtualization" (SDN+NFV) framework, which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and solved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM can reduce network traffic cost and energy consumption efficiently.
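
    A generic particle swarm optimization loop of the kind that could drive such a search (illustrative only; the paper's encoding of service-chain layouts into particles is not reproduced, and a real SCM solver would also round particles to feasible integer layouts):

        import numpy as np

        def pso(cost, dim, n_particles=30, iters=200, seed=0,
                w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0):
            """Minimize `cost` over [lo, hi]^dim with a standard PSO loop."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n_particles, dim))   # positions
            v = np.zeros_like(x)                          # velocities
            pbest = x.copy()
            pbest_f = np.apply_along_axis(cost, 1, x)     # personal bests
            g = pbest[np.argmin(pbest_f)]                 # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.apply_along_axis(cost, 1, x)
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[np.argmin(pbest_f)]
            return g, pbest_f.min()

        best, val = pso(lambda z: float(np.sum(z**2)), dim=5)  # toy cost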

  11. Tools for building virtual laboratories

    International Nuclear Information System (INIS)

    Agarwal, Debora; Johnston, William E.; Loken, Stewart; Tierney, Brian

    1996-01-01

    There is increasing interest in making unique research facilities accessible on the Internet. Computer systems, scientific databases and experimental apparatus can be used by international collaborations of scientists using high-speed networks and advanced software tools to support collaboration. We are building tools, including video conferencing and electronic whiteboards, that are being used to create examples of virtual laboratories. This paper describes two pilot projects which provide testbeds for the tools. The first is a virtual laboratory project providing remote access to LBNL's Advanced Light Source. The second is the Multidimensional Applications and Gigabit Internetwork Consortium (MAGIC) testbed, which has been established to develop a very high-speed, wide-area network to deliver real-time data at gigabit-per-second rates. (author)

  12. THE INTEGRATION OF CREATIVITY MANAGEMENT MODELS INTO UNIVERSITIES’ VIRTUAL LEARNING COMMUNITIES

    Directory of Open Access Journals (Sweden)

    Alexandru STRUNGĂ

    2014-12-01

    Full Text Available Given the access of an increasingly higher number of individuals to virtual learning networks, the issue of creativity management becomes extremely important, especially for schools and universities. In the specialized literature, participating in virtual learning communities has several advantages, including permanent access to information, high educational performance and increased creativity, and also a better-developed professional identity (North and Kumta, 2014; Boulay and van Raalte, 2014). In the Romanian literature, there are few studies that aim directly at the relationship between participation in virtual learning networks and creativity and innovation management models, especially in higher education institutions. This paper aims to study the ways in which creativity and innovation management models can be used in virtual learning networks in order to achieve better productivity at both individual and organizational levels, taking into account several best practices from this field and their possible implementation in Romanian educational institutions.

  13. Establishment and preliminary application of the interactive tele-radiologic conference system based on virtual private network

    International Nuclear Information System (INIS)

    Wang Xuejian; Hu Jian; Wang Kang; Yu Hui; Luo Min; Lei Wenyong

    2005-01-01

    Objective: To investigate the establishment and characteristics of an interactive tele-radiology system (IATRS). Methods: The local area network (LAN) of local hospitals, behind a firewall and an ADSL modem, was connected to the Internet and then to the Virtual Private Network (VPN) server of the affiliated hospital of Guiyang Medical College (GMCAH) through the GMCAH firewall. A VPN tunnel was established and the LAN of the local hospitals was connected to the PACS server of GMCAH, enabling radiological data to be shared by GMCAH and the local hospitals. Results: Radiological data from local hospitals could be transmitted by the PACS server of GMCAH safely and rapidly. The IATRS provided high-quality images at high speed and was easy to operate. Conclusion: IATRS is useful and reliable for transmitting radiological data between remote places. (authors)

  14. Scientific Assistant Virtual Laboratory (SAVL)

    Science.gov (United States)

    Alaghband, Gita; Fardi, Hamid; Gnabasik, David

    2007-03-01

    The Scientific Assistant Virtual Laboratory (SAVL) is a scientific discovery environment, an interactive simulated virtual laboratory, for learning physics and mathematics. The purpose of this computer-assisted intervention is to improve middle and high school student interest, insight and scores in physics and mathematics. SAVL develops scientific and mathematical imagination in a visual, symbolic, and experimental simulation environment. It directly addresses the issues of scientific and technological competency by providing critical thinking training through integrated modules. This on-going research provides a virtual laboratory environment in which the student directs the building of the experiment rather than observing a packaged simulation. SAVL:
    * Engages the persistent interest of young minds in physics and math by visually linking simulation objects and events with mathematical relations.
    * Teaches integrated concepts by the hands-on exploration and focused visualization of classic physics experiments within software.
    * Systematically and uniformly assesses and scores students by their ability to answer their own questions within the context of a Master Question Network.
    We will demonstrate how the Master Question Network uses polymorphic interfaces and C# lambda expressions to manage simulation objects.

  15. The Effect of Virtual versus Traditional Learning in Achieving Competency-Based Skills

    Science.gov (United States)

    Mosalanejad, Leili; Shahsavari, Sakine; Sobhanian, Saeed; Dastpak, Mehdi

    2012-01-01

    Background: With the rapid development of network technology, internet-based learning methods are substituting for traditional classrooms, expanding them into virtual network learning environments. The purpose of this study was to determine the effectiveness of virtual systems on competency-based skills of first-year nursing students.…

  16. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks such as Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved satisfactory performance on Tianhe-2 with very few modifications to existing applications implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
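
    Orion's actual command syntax is not given in the abstract, so the sketch below illustrates only the "allocate-when-needed" paradigm it describes: compute resources are acquired immediately before a job and released as soon as it finishes, never held idle. The alloc/free/launch commands are placeholders, not real Orion or Tianhe-2 tooling.

        import subprocess
        from contextlib import contextmanager

        @contextmanager
        def allocated_nodes(n):
            # Placeholder allocator calls; a real system would wrap the
            # machine's own resource manager instead of these stand-ins.
            alloc_id = subprocess.run(["alloc", str(n)], capture_output=True,
                                      text=True, check=True).stdout.strip()
            try:
                yield alloc_id          # nodes exist only inside this block
            finally:
                subprocess.run(["free", alloc_id], check=True)  # release immediately

        def run_bigdata_job(script, nodes=32):
            # Launch a Hadoop/Spark-style job on freshly allocated nodes,
            # then give the nodes back the moment the job completes.
            with allocated_nodes(nodes) as alloc_id:
                subprocess.run(["launch", "--allocation", alloc_id, script], check=True)

        # run_bigdata_job("genome_pipeline.sh")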

  17. A Drone Remote Sensing for Virtual Reality Simulation System for Forest Fires: Semantic Neural Network Approach

    Science.gov (United States)

    Narasimha Rao, Gudikandhula; Jagadeeswara Rao, Peddada; Duvvuru, Rajesh

    2016-09-01

    Wildfires have a significant impact on the atmosphere and on lives. Predicting the exact burned area in a forest can help fire management teams by using drones as robots. Drones are flexible, inexpensive, elevated-motion remote sensing platforms that are important for filling substantial data gaps and supplementing the capabilities of manned aircraft and satellite remote sensing systems. In addition, powerful computational tools are essential for predicting the burned area during a forest fire. The aim of this study is to build a smart system based on semantic neural networks for forecasting burned areas. A virtual reality simulator is used to support the training of firefighters and other users in protecting surrounding wildlife, using a naive method, the Semantic Neural Network System (SNNS). Semantics are valuable, first, for an enhanced representation of burned-area prediction and a better adaptation of the simulation scenario to the users. In particular, results obtained with geometric semantic neural networks are substantially superior to those of other methods. This study suggests that deeper investigation of neural networks in the field of forest fire prediction could be productive.
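
    The abstract does not specify the SNNS architecture, so the Python sketch below is only a generic illustration of the underlying idea: a small feed-forward network trained by backpropagation to regress burned area from environmental features. The features, data, and layer sizes are all invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative only: synthetic samples of [wind speed, humidity, fuel load]
        # mapped to a burned-area target; the paper's SNNS inputs are not public here.
        X = rng.uniform(0, 1, size=(200, 3))
        y = (2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2]).reshape(-1, 1)

        W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros((1, 8))
        W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))
        lr = 0.1

        for epoch in range(2000):
            h = np.tanh(X @ W1 + b1)          # hidden layer
            pred = h @ W2 + b2                # linear output for regression
            err = pred - y
            # Backpropagation through the two layers.
            gW2 = h.T @ err / len(X); gb2 = err.mean(0, keepdims=True)
            dh = (err @ W2.T) * (1 - h ** 2)
            gW1 = X.T @ dh / len(X); gb1 = dh.mean(0, keepdims=True)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

        print("final MSE:", float((err ** 2).mean()))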

  18. Multilingual Writing and Pedagogical Cooperation in Virtual Learning Environments

    DEFF Research Database (Denmark)

    Mousten, Birthe; Vandepitte, Sonia; Arnó Macà, Elisabet

    Multilingual Writing and Pedagogical Cooperation in Virtual Learning Environments is a critical scholarly resource that examines experiences with virtual networks and their advantages for universities and students in the domains of writing, translation, and usability testing. Featuring coverage o...

  19. Work in the virtual enterprise-creating identities, building trust, and sharing knowledge

    DEFF Research Database (Denmark)

    Rasmussen, Lauge Baungaard; Wangel, Arne

    2006-01-01

    The nodes of the network must be integrated across the barriers of missing face-to-face clues and cultural differences. The social integration of the virtual network involves the creation of identities of the participating nodes, the building of trust between them, and the sharing of tacit and explicit knowledge among them. The conventional organisation already doing well in these areas seems to have an edge when going virtual. The paper argues that the whole question of management and control must be reconsidered due to the particular circumstances in the ‘Network Society’. The paper outlines a suggestion for an exploratory, sociotechnical research approach combining the dimensions of context, subject and action with the twin objectives of contributing to the enhancement of collaborative capabilities in virtual teams as well as improving the insights into the nature of virtual work.

  20. Virtual worlds: a new frontier for nurse education?

    Science.gov (United States)

    Green, Janet; Wyllie, Aileen; Jackson, Debra

    2014-01-01

    Virtual worlds have the potential to offer nursing students social networking and learning opportunities through the use of collaborative and immersive learning. If nursing educators are to stay abreast of contemporary learning opportunities, an exploration of the potential benefits of virtual worlds and their possibilities is needed. Literature was sourced that explored virtual worlds and their use in education generally, but in nursing education specifically. It is clear that immersive learning has positive benefits for nursing; however, the best way to approach virtual reality in nursing education has yet to be ascertained.

  1. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
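
    The "(3+1)-dimensional nonlinear field evolution equation" mentioned above has a standard schematic form in the filamentation literature. The version below, with diffraction, group-velocity dispersion, Kerr nonlinearity, and a plasma term coupled to an ionization rate equation, is a textbook sketch and not necessarily the exact model solved in the paper:

        \[
        \frac{\partial A}{\partial z} = \frac{i}{2k_0}\nabla_\perp^2 A
          - \frac{i k''}{2}\frac{\partial^2 A}{\partial \tau^2}
          + \frac{i \omega_0 n_2}{c}\,|A|^2 A
          - \frac{\sigma}{2}\left(1 + i\omega_0\tau_c\right)\rho A,
        \qquad
        \frac{\partial \rho}{\partial t} = W\!\left(|A|^2\right)\left(\rho_{\mathrm{nt}} - \rho\right),
        \]

    where \(A\) is the field envelope, \(k_0\) and \(k''\) the wavenumber and dispersion coefficient, \(n_2\) the Kerr index, \(\rho\) the free-electron density with neutral density \(\rho_{\mathrm{nt}}\), \(W\) the field-dependent ionization rate, \(\sigma\) the inverse-bremsstrahlung cross-section, and \(\tau_c\) the electron collision time.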

  2. Mapping, Awareness, And Virtualization Network Administrator Training Tool Virtualization Module

    Science.gov (United States)

    2016-03-01

    The thesis's Conclusion and Future Work chapter discusses the successes and limitations of the MAVNATT Virtualization Module prototype and identifies directions for future work.

  3. The Development of a Virtual Dinosaur Museum

    Science.gov (United States)

    Tarng, Wernhuar; Liou, Hsin-Hun

    2007-01-01

    The objective of this article is to study the network and virtual reality technologies for developing a virtual dinosaur museum, which provides a Web-learning environment for students of all ages and the general public to know more about dinosaurs. We first investigate the method for building the 3D dynamic models of dinosaurs, and then describe…

  4. Termite: Emulation Testbed for Encounter Networks

    Directory of Open Access Journals (Sweden)

    Rodrigo Bruno

    2015-08-01

    Full Text Available Cutting-edge mobile devices like smartphones and tablets are equipped with various infrastructureless wireless interfaces, such as WiFi Direct and Bluetooth. Such technologies allow for novel mobile applications that take advantage of casual encounters between co-located users. However, the need to mimic the behavior of real-world encounter networks makes testing and debugging of such applications hard tasks. We present Termite, an emulation testbed for encounter networks. Our system allows developers to run their applications on a virtual encounter network emulated by software. Developers can model arbitrary encounter networks and specify user interactions on the emulated virtual devices. To facilitate testing and debugging, developers can place breakpoints, inspect the runtime state of virtual nodes, and run experiments in a stepwise fashion. Termite defines its own Petri Net variant to model the dynamically changing topology and synthesize user interactions with virtual devices. The system is designed to efficiently multiplex an underlying emulation hosting infrastructure across multiple developers, and to support heterogeneous mobile platforms. Our current system implementation supports virtual Android devices communicating over WiFi Direct networks and runs on top of a local cloud infrastructure. We evaluated our system using emulator network traces, and found that Termite is expressive and performs well.
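
    Termite's own Petri Net variant is not specified in the abstract, so the Python sketch below shows only a generic place/transition net, firing a hypothetical "encounter" transition that moves two devices into mutual radio range; Termite adds topology- and interaction-specific semantics not modeled here.

        # A generic place/transition Petri net (illustrative only).
        marking = {"A_alone": 1, "B_alone": 1, "A_B_in_range": 0}

        transitions = {
            # name: (tokens consumed, tokens produced)
            "encounter": ({"A_alone": 1, "B_alone": 1}, {"A_B_in_range": 2}),
            "depart":    ({"A_B_in_range": 2}, {"A_alone": 1, "B_alone": 1}),
        }

        def enabled(name):
            need, _ = transitions[name]
            return all(marking[p] >= k for p, k in need.items())

        def fire(name):
            assert enabled(name), f"{name} not enabled"
            need, produce = transitions[name]
            for p, k in need.items():
                marking[p] -= k
            for p, k in produce.items():
                marking[p] += k

        fire("encounter")   # devices A and B come into WiFi Direct range
        print(marking)      # {'A_alone': 0, 'B_alone': 0, 'A_B_in_range': 2}
        fire("depart")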

  5. Writing virtual environments for software visualization

    CERN Document Server

    Jeffery, Clinton

    2015-01-01

    This book describes the software for creating networked, 3D multi-user virtual environments that allow users to create and remotely share visualizations of program behavior. The authors cover the major features of collaborative virtual environments and how to program them in a very high level language, and show how visualization can enable important advances in our ability to understand and reduce the costs of maintaining software. The book also examines the application of popular game-like software technologies.
    • Discusses the acquisition of program behavior data to be visualized
    • Demonstrates the integration of multiple 2D and 3D dynamic views within a 3D scene
    • Presents the network messaging capabilities to share those visualizations

  6. The Virtual Desktop: Options and Challenges in Selecting a Secure Desktop Infrastructure Based on Virtualization

    Science.gov (United States)

    2011-10-01

    the virtual desktop environment still functions for the users associated with it. Users can access the virtual desktop through the local network and... desktop virtualization technology can help meet the need for secure information sharing within DND. ... It includes an overview of desktop virtualization, including an in-depth examination of two different architectures: the...

  7. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed.

  8. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed.

  9. Consuming Social Networks: A Study on BeeTalk Network

    Directory of Open Access Journals (Sweden)

    Jamal Mohammadi

    Full Text Available BeeTalk is one of the most common social networks and has attracted many users in recent years. Social networks are now part of everyday life and, especially among the new generation, have caused basic alterations in identity formation, sense-making, and the form and content of communication. This article reports research on BeeTalk users, their virtual interactions and experiences, and the feelings, pleasures, meanings and attitudes that they obtain through participating in the virtual world. This is a qualitative study. The sample was selected by way of theoretical sampling among the students of the University of Kurdistan. Direct observation and semi-structured interviews were used to gather data, which were interpreted through grounded theory. The findings show that contexts like “searching for real interests in a non-real world” and “the representation of users’ voices in virtual space” have provided the space for participating in BeeTalk, and an intervening factor called “instant availability” has intensified this participation. Users’ participation in this social network has changed their social interaction in the real world and formed new types of communication among them, such as “representation of faked identities”, “experiencing ceremonial space” and “artificial literacy”. Moreover, this participation has consequences like “virtual addiction” and “virtual collectivism” in users’ everyday life that affect their ways of constructing meaning and identity in their social lives. The result of users’ activity in this network is a kind of simulated relation that differs fundamentally from relations in the real world. The experience of relations in this network lacks nobility, enrichment and animation; rather, it is instant, artificial and without any potential for vitalization.

  10. Virtual reality training improves balance function.

    Science.gov (United States)

    Mao, Yurong; Chen, Peiming; Li, Le; Huang, Dongfeng

    2014-09-01

    Virtual reality is a new technology that simulates a three-dimensional virtual world on a computer and enables the generation of visual, audio, and haptic feedback for the full immersion of users. Users can interact with and observe objects in three-dimensional visual space without limitation. At present, virtual reality training has been widely used in rehabilitation therapy for balance dysfunction. This paper summarizes related articles and other articles suggesting that virtual reality training can improve balance dysfunction in patients after neurological diseases. When patients perform virtual reality training, the prefrontal, parietal cortical areas and other motor cortical networks are activated. These activations may be involved in the reconstruction of neurons in the cerebral cortex. Growing evidence from clinical studies reveals that virtual reality training improves the neurological function of patients with spinal cord injury, cerebral palsy and other neurological impairments. These findings suggest that virtual reality training can activate the cerebral cortex and improve the spatial orientation capacity of patients, thus facilitating the cortex to control balance and increase motion function.

  11. Virtual reality training improves balance function

    Science.gov (United States)

    Mao, Yurong; Chen, Peiming; Li, Le; Huang, Dongfeng

    2014-01-01

    Virtual reality is a new technology that simulates a three-dimensional virtual world on a computer and enables the generation of visual, audio, and haptic feedback for the full immersion of users. Users can interact with and observe objects in three-dimensional visual space without limitation. At present, virtual reality training has been widely used in rehabilitation therapy for balance dysfunction. This paper summarizes related articles and other articles suggesting that virtual reality training can improve balance dysfunction in patients after neurological diseases. When patients perform virtual reality training, the prefrontal, parietal cortical areas and other motor cortical networks are activated. These activations may be involved in the reconstruction of neurons in the cerebral cortex. Growing evidence from clinical studies reveals that virtual reality training improves the neurological function of patients with spinal cord injury, cerebral palsy and other neurological impairments. These findings suggest that virtual reality training can activate the cerebral cortex and improve the spatial orientation capacity of patients, thus facilitating the cortex to control balance and increase motion function. PMID:25368651

  12. Archeovirtual 2011: An evaluation approach to virtual museums

    NARCIS (Netherlands)

    Pescarin, S.; Pagano, A.; Wallergård, M.; Hupperetz, W.; Ray, C.; Guidi, G.; Addison, A.C.

    2012-01-01

    November 2011 saw the opening of the exhibition "Archeovirtual", organized by CNR ITABC - Virtual Heritage Lab - and the V-MusT Network of Excellence, in Paestum, Italy, under the general direction of BMTA1. The event, which was part of a wider European project focused on virtual museums, turned out to be a…

  13. Developing Preceptors through Virtual Communities and Networks: Experiences from a Pilot Project.

    Science.gov (United States)

    Ackman, Margaret L; Romanick, Marcel

    2011-11-01

    Supporting preceptors is critical to the expansion of experiential learning opportunities for the pharmacy profession. Informal learning opportunities within communities of practitioners are important for hospital preceptors. However, such communities may be limited by geographic separation of preceptors from peers, faculty members, and supports within the pharmacy services department. To use computer-mediated conferencing to create a sense of community among preceptors, specifically by using this medium to provide initial development of and continuing support for preceptors, and to examine preceptors' satisfaction with this approach. Thirty-nine preceptors who had completed a day-long face-to-face preceptor development workshop and who were supervising students in 1 of 2 specific rotation blocks were invited to participate in the study. The pharmacists used computer-mediated conferencing to meet for virtual networking about specific topics. They met once before the student rotation to receive instructions about the technology and to discuss student orientation and scheduling, and 3 times during the student rotation for open discussion of specific topics. Evaluation and feedback were solicited by means of an electronic survey and virtual (i.e., computer-based) feedback sessions with an independent facilitator. The response rate was 66% (26/39) for the electronic survey, but only 15% (6/39) for the virtual feedback sessions. All of the respondents were experienced preceptors, but for 92% (22/24), this was their first experience with computer-mediated conferencing. Overall, the sessions had a positive reception, and participants found it useful to share information and experiences with other preceptors. The main challenges were related to the technology, perceived lack of support for their participation in the sessions, and inconvenience related to the timing of sessions. Computer-mediated conferencing allowed preceptors to learn from and to support each other.

  14. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell-switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.

  15. Virtual IO controllers at J-PARC MR using Xen

    International Nuclear Information System (INIS)

    Kamikubota, N.; Yamada, S.; Yamamoto, N.; Iitsuka, T.; Motohashi, S.; Takagi, M.; Yoshida, S.; Nemoto, H.

    2012-01-01

    The control system for the J-PARC accelerator complex has been developed based on the EPICS toolkit. About 90 traditional ('real') VME-bus computers are used as EPICS IOCs (Input/Output Controllers) in the control system for J-PARC MR (Main Ring). In 2010-2011, we demonstrated a 'virtual' IOC using Xen, an open-source virtual machine monitor. Scientific Linux with an EPICS IOC runs on a Xen virtual machine. EPICS records for oscilloscopes (network devices) are configured. Advantages of virtual IOCs are discussed. In addition, future directions are discussed, and a plan for deploying virtual IOCs in MR operation is given. (authors)
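
    To give a concrete sense of what "EPICS records for oscilloscopes" means to a client, the snippet below reads a waveform process variable over Channel Access with the pyepics bindings; from the client's side it is invisible whether the serving IOC runs on real VME hardware or inside a Xen virtual machine. The PV name is invented for illustration.

        import epics  # pyepics: Channel Access client bindings

        # Hypothetical process variable served by an oscilloscope IOC.
        PV_NAME = "MR:OSC01:Waveform"

        waveform = epics.caget(PV_NAME, timeout=2.0)
        if waveform is None:
            print("no response from IOC (PV unreachable)")
        else:
            print(f"got {len(waveform)} samples from {PV_NAME}")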

  16. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for the distribution of applications, depending on the problem type, are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are then explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN are explained, which fits perfectly into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  17. Beyond L$: values across the virtual and the real

    NARCIS (Netherlands)

    Hu, J.; Offermans, S.A.M.

    2009-01-01

    Virtual societies and virtual worlds are now practically a part of the lives of many people, especially the younger generations who have grown up with the internet and mobile networks. Negative influences such as internet addiction and aggressive behavior have drawn attention from researchers.

  18. Cloud networking understanding cloud-based data center networks

    CERN Document Server

    Lee, Gary

    2014-01-01

    Cloud Networking: Understanding Cloud-Based Data Center Networks explains the evolution of established networking technologies into distributed, cloud-based networks. Starting with an overview of cloud technologies, the book explains how cloud data center networks leverage distributed systems for network virtualization, storage networking, and software-defined networking. The author offers insider perspective to key components that make a cloud network possible such as switch fabric technology and data center networking standards. The final chapters look ahead to developments in architectures

  19. Eavesdropping-aware routing and spectrum allocation based on multi-flow virtual concatenation for confidential information service in elastic optical networks

    Science.gov (United States)

    Bai, Wei; Yang, Hui; Yu, Ao; Xiao, Hongyun; He, Linkuan; Feng, Lei; Zhang, Jie

    2018-01-01

    The leakage of confidential information is one of the important issues in the network security area. Elastic Optical Networks (EON), a promising technology for the optical transport network, are under threat from eavesdropping attacks. There is great demand to support confidential information services (CIS) and to design efficient security strategies against eavesdropping attacks. In this paper, we propose a solution to cope with eavesdropping attacks in routing and spectrum allocation. First, we introduce probability theory to describe the eavesdropping issue and achieve awareness of eavesdropping attacks. Then we propose an eavesdropping-aware routing and spectrum allocation (ES-RSA) algorithm to guarantee information security. To further improve security and network performance, we employ multi-flow virtual concatenation (MFVC) and propose an eavesdropping-aware MFVC-based secure routing and spectrum allocation (MES-RSA) algorithm. The presented simulation results show that both proposed RSA algorithms achieve greater security against eavesdropping attacks, and that MES-RSA also improves network performance efficiently.
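
    The abstract's use of probability theory to describe eavesdropping suggests a simple routing criterion: if each link is tapped independently with probability p, a path is intercepted with probability 1 - prod(1 - p), which is minimized by a shortest path under link weights -log(1 - p). The Python sketch below illustrates only that criterion on an invented topology; it is not the full ES-RSA or MES-RSA spectrum-assignment algorithm.

        import math
        import networkx as nx

        # Hypothetical topology: edge attribute p = eavesdropping probability.
        G = nx.Graph()
        G.add_edge("A", "B", p=0.10)
        G.add_edge("B", "D", p=0.05)
        G.add_edge("A", "C", p=0.02)
        G.add_edge("C", "D", p=0.03)

        # Minimizing sum(-log(1-p)) over a path minimizes 1 - prod(1-p),
        # i.e. the probability the confidential flow is tapped anywhere.
        for u, v, data in G.edges(data=True):
            data["risk"] = -math.log(1.0 - data["p"])

        path = nx.shortest_path(G, "A", "D", weight="risk")
        p_safe = math.exp(-nx.shortest_path_length(G, "A", "D", weight="risk"))
        print("safest path:", path, "interception probability:", round(1 - p_safe, 4))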

  20. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, J.; Gunter, D.; Tierney, B.; Allcock, B.; Bester, J.; Bresnahan, J.; Tuecke, S.

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. From their work developing a scalable distributed network cache, the authors have gained experience with techniques necessary to achieve high data throughput over high-bandwidth Wide Area Networks (WANs). The authors discuss several hardware and software design techniques, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. The authors describe results from the Supercomputing 2000 conference.
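
    Two classic software techniques for high throughput over high-latency WANs, both popularized by GridFTP-style tools, are TCP send buffers sized toward the bandwidth-delay product and parallel streams. The Python sketch below shows both on raw sockets; the endpoint, stream count, and buffer size are placeholder values, and a real protocol would also tag each chunk with its offset for reassembly.

        import socket
        import threading

        HOST, PORT = "data.example.org", 2811   # placeholder endpoint
        STREAMS = 4                              # parallel TCP streams
        BUF = 4 * 1024 * 1024                    # ~bandwidth-delay product, e.g. 4 MiB

        def send_chunk(chunk: bytes):
            with socket.create_connection((HOST, PORT)) as s:
                # Enlarge the send buffer so a high-latency path can stay full.
                s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
                s.sendall(chunk)

        def parallel_send(data: bytes):
            # Split the payload across independent streams, one thread per
            # stream, so one congestion event does not stall the whole transfer.
            step = (len(data) + STREAMS - 1) // STREAMS
            threads = [threading.Thread(target=send_chunk, args=(data[i:i + step],))
                       for i in range(0, len(data), step)]
            for t in threads: t.start()
            for t in threads: t.join()

        # parallel_send(open("dataset.bin", "rb").read())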